DICE: End-to-end Deformation Capture of Hand-Face Interactions from a Single Image

arXiv 2024.

Abstract


Reconstructing 3D hand-face interactions with deformations from a single image is a challenging yet crucial task with broad applications in AR, VR, and gaming. The challenges stem from self-occlusions during single-view hand-face interactions, diverse spatial relationships between hands and faces, complex deformations, and the ambiguity of the single-view setting. The first and only existing method for hand-face interaction recovery, Decaf, introduces a global fitting optimization guided by contact and deformation estimation networks trained on studio-collected data with 3D annotations. However, Decaf suffers from a time-consuming optimization process and limited generalization capability due to its reliance on 3D annotations of hand-face interaction data. To address these issues, we present DICE, the first end-to-end method for Deformation-aware hand-face Interaction reCovEry from a single image. DICE estimates the poses of hands and faces, contacts, and deformations simultaneously using a Transformer-based architecture. It disentangles the regression of local deformation fields and global mesh vertex locations into two network branches, enhancing deformation and contact estimation for precise and robust hand-face mesh recovery. To improve generalizability, we propose a weakly-supervised training approach that augments the training set with in-the-wild images lacking 3D ground-truth annotations, using the depths of 2D keypoints estimated by off-the-shelf models and adversarial pose priors as supervision. Our experiments demonstrate that DICE achieves state-of-the-art accuracy and physical plausibility both on a standard benchmark and on in-the-wild data. Moreover, our method runs at an interactive rate (20 fps) on an Nvidia 4090 GPU, whereas Decaf requires more than 15 seconds per image. Our code will be publicly available upon publication.
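As a concrete illustration of the keypoint-depth supervision mentioned above, below is a minimal PyTorch sketch (not the paper's exact formulation): predicted keypoint depths are compared with depths sampled from an off-the-shelf relative depth map at detected 2D keypoint locations, after a scale-and-shift normalization that accounts for the unknown scale of relative depth. The function name, normalization, and confidence weighting are illustrative assumptions.

import torch

def _normalize_depth(z, eps=1e-8):
    # Median/MAD normalization makes the comparison invariant to the unknown
    # scale and shift of relative depth estimates.
    z = z - z.median()
    return z / (z.abs().mean() + eps)

def keypoint_depth_loss(pred_kp3d, det_kp2d, depth_map, conf):
    """pred_kp3d: (N, 3) predicted 3D keypoints in camera space.
    det_kp2d:  (N, 2) detected 2D keypoints in pixel coordinates.
    depth_map: (H, W) relative depth from an off-the-shelf estimator.
    conf:      (N,) detector confidences, used to down-weight unreliable keypoints."""
    H, W = depth_map.shape
    u = det_kp2d[:, 0].round().long().clamp(0, W - 1)
    v = det_kp2d[:, 1].round().long().clamp(0, H - 1)
    target_z = depth_map[v, u]        # pseudo ground-truth depth per keypoint
    pred_z = pred_kp3d[:, 2]          # predicted depth per keypoint
    return (conf * (_normalize_depth(pred_z) - _normalize_depth(target_z)).abs()).mean()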


Our method, DICE, is the first end-to-end approach that captures hand-face interaction and deformation from a monocular image. (a) Decaf validation dataset. (b) In-the-wild images. (c) Use cases in VR.

Framework


Overview of the proposed DICE framework. The input image is first fed to a CNN to extract a feature map, which is then passed to two Transformer-based encoders for mesh and interaction, i.e., MeshNet and InteractionNet. MeshNet extracts hand and face mesh features, which are then used by the inverse kinematics models (IKNets) to predict the pose and shape parameters that drive the FLAME and MANO models. InteractionNet predicts per-vertex hand-face contact probabilities and facial deformation fields from the feature map; the deformation fields are applied to the face mesh output by the FLAME model. To improve generalization, we introduce a weakly-supervised training scheme that uses off-the-shelf 2D keypoint detection and depth estimation models to provide depth supervision on keypoints. In addition, we use head and hand discriminators to constrain the distribution of the parameters regressed by the IKNets.
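To make this data flow concrete, the following is a minimal PyTorch sketch of a DICE-style forward pass. The convolutional stem, mean-pooled Transformer tokens, and parameter dimensions are illustrative assumptions rather than the released architecture, and the FLAME/MANO layers that the predicted parameters would drive are omitted.

import torch
import torch.nn as nn

class DICESketch(nn.Module):
    """Illustrative skeleton of the two-branch design: MeshNet feeds the IKNets,
    while InteractionNet predicts contacts and a facial deformation field."""
    def __init__(self, feat_dim=512, n_face_verts=5023, n_hand_verts=778):
        super().__init__()
        # Stand-in CNN backbone producing a grid of image tokens.
        self.backbone = nn.Conv2d(3, feat_dim, kernel_size=16, stride=16)
        layer = nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True)
        self.mesh_net = nn.TransformerEncoder(layer, num_layers=4)         # MeshNet
        self.interaction_net = nn.TransformerEncoder(layer, num_layers=4)  # InteractionNet
        # IKNets: mesh features -> FLAME / MANO parameters (dimensions assumed).
        self.face_iknet = nn.Linear(feat_dim, 100 + 50)   # FLAME pose + shape/expression
        self.hand_iknet = nn.Linear(feat_dim, 48 + 10)    # MANO pose + shape
        # Interaction heads: per-vertex contact probabilities and deformations.
        self.contact_head = nn.Linear(feat_dim, n_face_verts + n_hand_verts)
        self.deform_head = nn.Linear(feat_dim, n_face_verts * 3)
        self.n_face_verts = n_face_verts

    def forward(self, image):                                     # image: (B, 3, 224, 224)
        tokens = self.backbone(image).flatten(2).transpose(1, 2)  # (B, N, C) image tokens
        mesh_feat = self.mesh_net(tokens).mean(dim=1)             # pooled mesh feature
        inter_feat = self.interaction_net(tokens).mean(dim=1)     # pooled interaction feature

        face_params = self.face_iknet(mesh_feat)   # would drive a FLAME layer
        hand_params = self.hand_iknet(mesh_feat)   # would drive a MANO layer
        contact = torch.sigmoid(self.contact_head(inter_feat))    # per-vertex probabilities
        deform = self.deform_head(inter_feat).view(-1, self.n_face_verts, 3)
        # In the full model, `deform` is added to the FLAME output vertices to
        # obtain the deformed face mesh; here we return the raw predictions.
        return face_params, hand_params, contact, deform

The split into MeshNet and InteractionNet branches mirrors the disentanglement of global mesh vertex regression from local deformation and contact estimation described in the abstract.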


Hand-Face Interaction, Deformation, and Contact Recovery


Qualitative results of hand-face interaction, deformation, and contact recovery by DICE on Decaf and in-the-wild images. In contact visualizations, a deeper color indicates a higher contact probability.


Comparisons with Previous Methods


Qualitative comparison of DICE, Decaf, PIXIE, and METRO* on the Decaf validation set and in-the-wild images. Our method achieves superior reconstruction accuracy and plausibility on the Decaf dataset, while generalizing well to difficult in-the-wild actions unseen in Decaf.



Comparison with state-of-the-art methods for hand-face interaction and deformation recovery on the Decaf dataset.




Comparison with Decaf on hand-face contact estimation on the Decaf dataset.



Check out our paper for more details.

Citation

@article{wu2024dice,
  title={DICE: End-to-end Deformation Capture of Hand-Face Interactions from a Single Image},
  author={Wu, Qingxuan and Dou, Zhiyang and Xu, Sirui and Shimada, Soshi and Wang, Chen and Yu, Zhengming and Liu, Yuan and Lin, Cheng and Cao, Zeyu and Komura, Taku and others},
  journal={arXiv preprint arXiv:2406.17988},
  year={2024}
}
