While recent advances in deep neural networks have made it possible to render high-quality images, generating a photo-realistic and personalized talking head remains challenging. Given the driving audio, the key to this task is to synchronize lip movements while simultaneously generating personalized attributes such as head movements and eye blinks. In this work, we observe that the input audio is highly correlated with lip motion but only weakly correlated with other personalized attributes (e.g., head movements). Motivated by this observation, we propose a novel framework based on neural radiance fields to pursue high-fidelity and personalized talking-head generation. Specifically, the neural radiance field takes lip-movement features and personalized attributes as two disentangled conditions, where lip movements are directly predicted from the audio input to achieve lip-synchronized generation. Meanwhile, personalized attributes are sampled from a probabilistic model: we design a Transformer-based variational autoencoder whose latent codes are sampled from a Gaussian process to learn plausible and natural-looking head poses and eye blinks. Experiments on several benchmarks demonstrate that our method achieves significantly better results than state-of-the-art methods.
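To make the disentangled conditioning concrete, below is a minimal PyTorch sketch (not the authors' released code). It assumes a standard NeRF-style MLP; all class names, layer sizes, and feature dimensions (`pos_dim`, `lip_dim`, `attr_dim`) are illustrative assumptions, and a plain Gaussian sample stands in for the paper's Transformer-VAE latent drawn from a Gaussian process.

```python
# Illustrative sketch only: the real DFA-NeRF architecture and
# dimensions may differ from these assumed values.
import torch
import torch.nn as nn

class ConditionalNeRF(nn.Module):
    """NeRF-style MLP conditioned on two disentangled codes:
    audio-derived lip features and a sampled personalized-attribute code."""

    def __init__(self, pos_dim=63, lip_dim=64, attr_dim=32, hidden=256):
        super().__init__()
        in_dim = pos_dim + lip_dim + attr_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # per-point RGB + volume density
        )

    def forward(self, pos_enc, lip_feat, attr_code):
        # Broadcast the per-frame condition codes to every sampled 3D point.
        n_pts = pos_enc.shape[0]
        cond = torch.cat([lip_feat, attr_code], dim=-1).expand(n_pts, -1)
        return self.mlp(torch.cat([pos_enc, cond], dim=-1))

pos_enc = torch.randn(1024, 63)   # positionally encoded ray samples
lip_feat = torch.randn(1, 64)     # predicted directly from the audio
attr_code = torch.randn(1, 32)    # stand-in for the GP-sampled VAE latent
rgb_sigma = ConditionalNeRF()(pos_enc, lip_feat, attr_code)
print(rgb_sigma.shape)            # torch.Size([1024, 4])
```

Keeping the two condition codes as separate inputs is what allows lip motion to follow the audio deterministically while head pose and blinks vary stochastically across samples.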
Pipeline of our method
More synthesized videos
@article{yao2022dfa,
  title={DFA-NeRF: Personalized Talking Head Generation via Disentangled Face Attributes Neural Rendering},
  author={Yao, Shunyu and Zhong, RuiZhe and Yan, Yichao and Zhai, Guangtao and Yang, Xiaokang},
  journal={arXiv preprint arXiv:2201.00791},
  year={2022}
}