EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Human Motion Generation

Wenyang Zhou 1   Zhiyang Dou †2,5   Zeyu Cao 1   Zhouyingcheng Liao 2   Jingbo Wang 3   Wenjia Wang 2  
Yuan Liu 2   Taku Komura 2   Wenping Wang 4   Lingjie Liu 5  
†Project Lead.
arXiv, 2023.

Abstract


We introduce the Efficient Motion Diffusion Model (EMDM) for fast, high-quality human motion generation. Although previous motion diffusion models have shown impressive results, they struggle to achieve fast generation while maintaining high-quality motions. Motion latent diffusion has been proposed for efficient motion generation; however, effectively learning a latent space can be non-trivial in such a two-stage setup. Meanwhile, accelerating motion sampling by naively increasing the step size, e.g., with DDIM, typically degrades motion quality because the complex denoising distribution is poorly approximated at large step sizes. In this paper, we propose EMDM, which requires far fewer sampling steps for fast motion generation by modeling the complex denoising distribution across multiple sampling steps. Specifically, we develop a Conditional Denoising Diffusion GAN to capture multimodal data distributions conditioned on both the control signal, i.e., the textual description, and the denoising time step. By modeling this complex distribution, EMDM admits a much larger sampling step size and far fewer steps during motion synthesis, significantly accelerating the generation process. To effectively capture human dynamics and reduce undesired artifacts, we employ a motion geometric loss during network training, which improves both motion quality and training efficiency. As a result, EMDM achieves a remarkable speed-up at the generation stage while maintaining high-quality motion generation in terms of fidelity and diversity.
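To make the training-time geometric supervision concrete, below is a minimal sketch of typical motion geometric losses (joint positions, velocities, and foot contact) in the spirit of MDM-style training, assuming predicted and ground-truth joint positions of shape (B, T, J, 3) and a binary foot-contact mask of shape (B, T, F). The joint indices, loss weights, and function names are illustrative placeholders, not values or APIs from the paper.

# A minimal sketch of motion geometric losses; all names, indices, and weights
# below are assumptions for illustration, not EMDM's actual implementation.
import torch
import torch.nn.functional as F

FOOT_JOINTS = [7, 8, 10, 11]  # hypothetical indices of ankle/toe joints

def geometric_loss(pred_pos, gt_pos, contact_mask,
                   w_pos=1.0, w_vel=1.0, w_foot=1.0):
    # Joint-position term: match predicted and ground-truth joint locations.
    loss_pos = F.mse_loss(pred_pos, gt_pos)

    # Velocity term: match frame-to-frame joint velocities to reduce jitter.
    pred_vel = pred_pos[:, 1:] - pred_pos[:, :-1]
    gt_vel = gt_pos[:, 1:] - gt_pos[:, :-1]
    loss_vel = F.mse_loss(pred_vel, gt_vel)

    # Foot-contact term: penalize foot sliding whenever the ground truth marks
    # a foot as planted (contact_mask == 1).
    foot_vel = pred_vel[:, :, FOOT_JOINTS]                      # (B, T-1, F, 3)
    loss_foot = (foot_vel.pow(2).sum(-1) * contact_mask[:, 1:]).mean()

    return w_pos * loss_pos + w_vel * loss_vel + w_foot * loss_foot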


EMDM efficiently produces high-quality human motion aligned with the input conditions in a short runtime. The average running time of EMDM on the action-to-motion and text-to-motion tasks is 0.02s and 0.05s per sequence, respectively; for reference, the corresponding times for MDM are 2.5s and 12.3s. For visualization purposes, the character's color deepens with the time step.

Framework


Overview of EMDM. During the training stage, we develop a conditional denoising diffusion GAN to capture the complex distribution of human body motion, allowing for a larger sampling step size. At the inference stage, we use this larger step size for fast sampling of high-quality human motion according to the input condition.
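For illustration, here is a minimal sketch of the kind of few-step sampling loop such a denoising-diffusion-GAN formulation enables: a conditional generator predicts the clean motion directly from the noisy sample, the time step, a GAN latent, and the text embedding, and the next sample is drawn from the standard DDPM posterior. The generator interface, noise schedule, and tensor shapes below are assumptions for the sketch, not EMDM's actual implementation.

# A hypothetical few-step sampling loop in the spirit of a denoising-diffusion-GAN
# sampler; `generator` and its signature are assumed, not EMDM's real API.
import torch

@torch.no_grad()
def sample_motion(generator, text_emb, num_steps=10,
                  motion_shape=(1, 196, 263), device="cpu"):
    # Coarse noise schedule with only `num_steps` steps (i.e., a large step size).
    betas = torch.linspace(1e-4, 0.5, num_steps, device=device)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x_t = torch.randn(motion_shape, device=device)  # start from pure noise
    for t in reversed(range(num_steps)):
        z = torch.randn(motion_shape[0], 64, device=device)  # latent for multimodal outputs
        x0_pred = generator(x_t, t, z, text_emb)              # directly predict the clean motion

        if t == 0:
            x_t = x0_pred
            break

        # DDPM posterior q(x_{t-1} | x_t, x0_pred); with an expressive generator
        # this remains a good approximation even with very few steps.
        ab_t, ab_prev = alpha_bars[t], alpha_bars[t - 1]
        coef_x0 = torch.sqrt(ab_prev) * betas[t] / (1.0 - ab_t)
        coef_xt = torch.sqrt(alphas[t]) * (1.0 - ab_prev) / (1.0 - ab_t)
        mean = coef_x0 * x0_pred + coef_xt * x_t
        var = betas[t] * (1.0 - ab_prev) / (1.0 - ab_t)
        x_t = mean + torch.sqrt(var) * torch.randn_like(x_t)
    return x_t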


Inference Time Costs


Overall comparison of inference time costs on HumanML3D, KIT, and HumanAct12. We plot running time per frame vs. FID for four state-of-the-art methods.

Comparisons on text-to-motion


Comparison on the text-to-motion task on HumanML3D. The right arrow (→) indicates that values closer to real motion are better.



Comparison on the text-to-motion task on KIT. The right arrow (→) indicates that values closer to real motion are better.



Qualitative comparison of state-of-the-art methods on the text-to-motion task. We visualize motion results and real references for six text prompts. EMDM achieves the fastest motion generation while delivering high-quality motions that closely align with the text inputs.

Comparisons on action-to-motion


Comparison on the action-to-motion task on HumanAct12. FID_train indicates the evaluated split; ACC denotes action-recognition accuracy; Diversity (DIV) and MultiModality (MM) measure the diversity of generated motions w.r.t. each action label.



Qualitative comparison of state-of-the-art methods on the action-to-motion task.

More Results


Text-to-Motion

More qualitative results of EMDM on the task of text-to-motion.

Action-to-Motion

More qualitative results of EMDM on the task of action-to-motion.



Check out our paper for more details.

Citation

@article{zhou2023emdm,
  title={EMDM: Efficient Motion Diffusion Model for Fast, High-Quality Motion Generation},
  author={Zhou, Wenyang and Dou, Zhiyang and Cao, Zeyu and Liao, Zhouyingcheng and Wang, Jingbo and Wang, Wenjia and Liu, Yuan and Komura, Taku and Wang, Wenping and Liu, Lingjie},
  journal={arXiv preprint arXiv:2312.02256},
  year={2023}
}
