Rebuttal Demos for Real3D-Portrait

Rebuttal Demo 1: Motion Adapter versus Deformation Field

To better demonstrate the superiority of the motion adapter in morphing the 3D face, we provide a demo that visualizes the depth and color images of a deformation-based model (in which the motion adapter is replaced with HiDe-NeRF's deformation field) and our motion adapter-based model, both driven by the same audio.
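
As a rough illustration of how such a comparison video could be assembled, the sketch below colormaps each variant's depth map and stacks it next to the corresponding color frame before writing the result to disk. The random arrays merely stand in for the two models' actual per-frame renderings; this is not the released visualization code, only a minimal example using standard libraries.

```python
import numpy as np
import imageio.v2 as imageio
from matplotlib import cm

def depth_to_rgb(depth):
    """Normalize a depth map to [0, 1] and apply a colormap for visualization."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return (cm.viridis(d)[..., :3] * 255).astype(np.uint8)

num_frames = 5  # placeholder frame count; in practice one frame per audio window
for t in range(num_frames):
    # Placeholder renderings: random RGB and depth for both model variants.
    rgb_adapter = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
    depth_adapter = np.random.rand(256, 256)
    rgb_deform = np.random.randint(0, 255, (256, 256, 3), dtype=np.uint8)
    depth_deform = np.random.rand(256, 256)
    # Side-by-side layout: [adapter RGB | adapter depth | deform RGB | deform depth]
    row = np.concatenate(
        [rgb_adapter, depth_to_rgb(depth_adapter), rgb_deform, depth_to_rgb(depth_deform)],
        axis=1,
    )
    imageio.imwrite(f"compare_frame_{t:03d}.png", row)
```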

Rebuttal Demo 2: I2P model performing Multi-View Synthesis

To better demonstrate that our image-to-plane (I2P) model can reconstruct the 3D face given the source image, we extract the I2P model from Real3D-Portrait's final checkpoint and directly volume-render the canonical tri-planes produced by the I2P model.
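
As a loose sketch of this procedure, the snippet below shows one way to slice the image-to-plane weights out of a full checkpoint and then sweep the camera for multi-view rendering. The "i2p." key prefix, the I2PModel class, and the make_camera/volume_render helpers are illustrative assumptions, not the released code's actual names.

```python
import torch

# Load the full checkpoint and pull out the raw parameter dictionary.
ckpt = torch.load("real3d_portrait_final.ckpt", map_location="cpu")
state = ckpt.get("state_dict", ckpt)

# Keep only the image-to-plane weights (assumed here to live under an "i2p." prefix).
i2p_state = {k[len("i2p."):]: v for k, v in state.items() if k.startswith("i2p.")}
print(f"extracted {len(i2p_state)} I2P tensors")

# Hypothetical multi-view sweep over camera yaw, rendering the canonical tri-planes:
# i2p_model = I2PModel()                       # assumed class name
# i2p_model.load_state_dict(i2p_state)
# i2p_model.eval()
# planes = i2p_model(source_image)             # canonical tri-planes from one source image
# for i, yaw in enumerate([-0.5, -0.25, 0.0, 0.25, 0.5]):
#     cam = make_camera(yaw=yaw, pitch=0.0)    # assumed pose helper
#     rgb, depth = volume_render(planes, cam)  # assumed volume renderer
#     save_image(rgb, f"view_{i}.png")
```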

Rebuttal Demo 3: Real3D-Portrait Generalizes Well to Difficult Source Images

In the following video, we show that our image-to-plane model and motion adapter can handle source images with arbitrary expressions, for instance a wide-open mouth, a lowered jaw, or closed eyes.

Post-Rebuttal Demo 1: Mesh Visualization between Real3D-Portrait and Deformation Field

To better compare the predicted geometry (both depth and surface normals), we follow the instructions from EG3D in this link. We can see that our method, with the I2P model and motion adapter, reconstructs good geometry, while the deformation-field-based model fails to produce comparable geometry.
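
For context, EG3D-style shape visualization boils down to sampling the volumetric density on a regular 3D grid and running marching cubes on it. The self-contained sketch below illustrates that pipeline with a toy spherical density standing in for the tri-plane decoder's actual output; it is only an assumption-laden illustration, not the EG3D or Real3D-Portrait code.

```python
import numpy as np
from skimage.measure import marching_cubes

def query_density(points):
    """Toy density: high inside a unit sphere, zero outside (stand-in for the NeRF decoder)."""
    return (1.0 - np.linalg.norm(points, axis=-1)).clip(min=0.0)

res = 128                                   # grid resolution per axis
lin = np.linspace(-1.2, 1.2, res)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)   # (res, res, res, 3)
sigma = query_density(grid.reshape(-1, 3)).reshape(res, res, res)

# Marching cubes at a density iso-level gives a mesh whose surface normals can be inspected.
verts, faces, normals, _ = marching_cubes(sigma, level=0.5)

# Save as an OBJ so the geometry (and its normals) can be viewed in any mesh viewer.
with open("extracted_shape.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for face in faces + 1:                  # OBJ indices are 1-based
        f.write(f"f {face[0]} {face[1]} {face[2]}\n")
```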

Post-Rebuttal Demo 2: Lowered-Jaw Expression Source Image

In the following demo, we show that our I2P model and motion adapter generalize well to hard source images with a lowered-jaw expression.

BibTeX

@inproceedings{ye2024real3dportrait,
    author    = {Ye, Zhenhui and Zhong, Tianyun and Ren, Yi and Yang, Jiaqi and Li, Weichuang and Huang, Jiangwei and Jiang, Ziyue and He, Jinzheng and Huang, Rongjie and Liu, Jinglin and Zhang, Chen and Yin, Xiang and Ma, Zejun and Zhao, Zhou},
    title     = {Real3D-Portrait: One-shot Realistic 3D Talking Portrait Synthesis},
    booktitle = {International Conference on Learning Representations (ICLR)},
    year      = {2024},
  }