diff --git a/index.html b/index.html
index f3fc572..24d3207 100644
--- a/index.html
+++ b/index.html
@@ -162,7 +162,7 @@

PEGASUS: Personalized Generative 3D Avatars
-
+
+
@@ -245,31 +245,20 @@

Abstract

- We present the first method capable of photorealistically reconstructing a non-rigidly
- deforming scene using photos/videos captured casually from mobile phones.
+ We present PEGASUS, a method for constructing personalized generative 3D face avatars from monocular video sources.

- Our approach augments neural radiance fields
- (NeRF) by optimizing an
- additional continuous volumetric deformation field that warps each observed point into a
- canonical 5D NeRF.
- We observe that these NeRF-like deformation fields are prone to local minima, and
- propose a coarse-to-fine optimization method for coordinate-based models that allows for
- more robust optimization.
- By adapting principles from geometry processing and physical simulation to NeRF-like
- models, we propose an elastic regularization of the deformation field that further
- improves robustness.
+ As a compositional generative model,
+ PEGASUS enables disentangled control to selectively alter the facial attributes (e.g., hair or nose) of the target individual
+ while preserving the identity. We present two key approaches to achieve this goal.

- We show that Nerfies can turn casually captured selfie
- photos/videos into deformable NeRF
- models that allow for photorealistic renderings of the subject from arbitrary
- viewpoints, which we dub "nerfies". We evaluate our method by collecting data
- using a
- rig with two mobile phones that take time-synchronized photos, yielding train/validation
- images of the same pose at different viewpoints. We show that our method faithfully
- reconstructs non-rigidly deforming scenes and reproduces unseen views with high
- fidelity.
+ First, we present a method to construct a person-specific generative 3D avatar
+ by building a synthetic video collection of the target identity with varying facial attributes,
+ where the videos are synthesized by borrowing parts from diverse individuals in other monocular videos.
+ Through several experiments, we demonstrate the superior performance of our approach in generating unseen attributes with high realism.
+ Subsequently, we introduce a zero-shot approach that achieves the same generative modeling more efficiently
+ by leveraging a previously constructed personalized generative model.
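
To make the compositional control described in the new abstract text concrete, below is a minimal, hypothetical sketch of per-part latent composition: attribute codes (e.g., hair) borrowed from a donor identity replace the target's, while the identity code stays fixed. Every name here (AvatarLatents, compose, PART_NAMES, the latent sizes) is an illustrative assumption, not the PEGASUS API.

# Hypothetical sketch of disentangled attribute swapping, assuming an avatar is
# represented by one identity code plus one latent code per facial attribute.
from dataclasses import dataclass
from typing import Dict

import numpy as np

PART_NAMES = ("hair", "nose", "mouth", "eyes")  # assumed attribute partition


@dataclass
class AvatarLatents:
    identity: np.ndarray          # fixed code that preserves the target person
    parts: Dict[str, np.ndarray]  # one swappable code per facial attribute


def compose(target: AvatarLatents, donors: Dict[str, AvatarLatents]) -> AvatarLatents:
    """Swap the requested attribute codes from donor identities into the target.

    Only the named parts change; the identity code and all untouched parts are
    copied from the target, which is what keeps the edit disentangled.
    """
    parts = dict(target.parts)
    for name, donor in donors.items():
        parts[name] = donor.parts[name]
    return AvatarLatents(identity=target.identity, parts=parts)


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def random_avatar() -> AvatarLatents:
        return AvatarLatents(
            identity=rng.normal(size=64),
            parts={n: rng.normal(size=32) for n in PART_NAMES},
        )

    target, donor = random_avatar(), random_avatar()
    edited = compose(target, {"hair": donor})  # target's face, donor's hair
    assert np.array_equal(edited.identity, target.identity)
    assert np.array_equal(edited.parts["hair"], donor.parts["hair"])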