HOIAnimator: Generating Text-prompt Human-object Animations using Novel Perceptive Diffusion Models

[CVPR 2024]
Computer School, Beijing Information Science and Technology University, P.R. China
State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, P.R. China
Stony Brook University, New York, USA
* Corresponding Author

Abstract

To date, the quest to rapidly and effectively produce human-object interaction (HOI) animations directly from textual descriptions stands at the forefront of computer vision research. The underlying challenge demands both a discriminating interpretation of language and a comprehensive, physics-centric model of real-world dynamics. To address this, we present HOIAnimator, a novel interactive diffusion model with perception ability, crafted to animate complex interactions from linguistic narratives. The effectiveness of our model rests on two innovations: (1) Our Perceptive Diffusion Models (PDM) bring together two models, one focused on human motion and the other on object motion. This combination yields animations in which humans and objects move in concert, making the overall motion more realistic. We further propose a Perceptive Message Passing (PMP) mechanism that strengthens communication between the two models, ensuring smooth and unified animations. (2) We devise an Interaction Contact Field (ICF), a model that implicitly captures the essence of HOIs. Beyond merely predicting contact points, the ICF assesses the proximity of the human and the object to their surrounding environment, informed by a probabilistic distribution of interactions learned during the denoising phase. Our comprehensive evaluation shows that HOIAnimator produces dynamic, context-aware animations that surpass existing benchmarks in text-driven animation synthesis. We will release the source code upon the paper's acceptance.
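The abstract gives no implementation details, but the coupled-denoiser idea behind PDM and PMP can be pictured with a minimal sketch. The PyTorch snippet below is purely illustrative: the Transformer backbone, the pooled text embedding, the cross-attention message operator, and all module names and dimensions are assumptions on our part, not the paper's actual architecture, and diffusion-timestep conditioning is omitted for brevity.

```python
import torch
import torch.nn as nn

class DenoiserBranch(nn.Module):
    """One denoising network (human branch or object branch). The backbone
    is a hypothetical stand-in: a small Transformer encoder for illustration."""

    def __init__(self, motion_dim, hidden_dim=256):
        super().__init__()
        self.in_proj = nn.Linear(motion_dim, hidden_dim)
        layer = nn.TransformerEncoderLayer(hidden_dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.out_proj = nn.Linear(hidden_dim, motion_dim)

    def features(self, x_t, text_emb):
        # x_t: (B, T, motion_dim) noisy motion; text_emb: (B, hidden_dim).
        h = self.in_proj(x_t)
        h = torch.cat([text_emb.unsqueeze(1), h], dim=1)  # prepend text token
        return self.backbone(h)[:, 1:]                    # drop text token

class PerceptiveMessagePassing(nn.Module):
    """Illustrative PMP: each branch cross-attends to the other branch's
    features so the two denoisers stay coordinated during denoising."""

    def __init__(self, hidden_dim=256):
        super().__init__()
        self.obj_to_human = nn.MultiheadAttention(hidden_dim, 4, batch_first=True)
        self.human_to_obj = nn.MultiheadAttention(hidden_dim, 4, batch_first=True)

    def forward(self, h_feat, o_feat):
        h_msg, _ = self.obj_to_human(h_feat, o_feat, o_feat)  # object -> human
        o_msg, _ = self.human_to_obj(o_feat, h_feat, h_feat)  # human -> object
        return h_feat + h_msg, o_feat + o_msg                 # residual updates

# One coupled denoising step: both branches predict noise after exchanging
# messages, so each prediction is informed by the other's state.
B, T, D_HUMAN, D_OBJ, H = 2, 60, 66, 9, 256
human, obj = DenoiserBranch(D_HUMAN, H), DenoiserBranch(D_OBJ, H)
pmp = PerceptiveMessagePassing(H)

text = torch.randn(B, H)  # stand-in for a pooled text embedding (e.g. CLIP)
x_h, x_o = torch.randn(B, T, D_HUMAN), torch.randn(B, T, D_OBJ)
f_h, f_o = pmp(human.features(x_h, text), obj.features(x_o, text))
eps_h, eps_o = human.out_proj(f_h), obj.out_proj(f_o)
print(eps_h.shape, eps_o.shape)  # torch.Size([2, 60, 66]) torch.Size([2, 60, 9])
```

Similarly, the ICF can be pictured as a soft mapping from human-object proximity to contact likelihood. The sketch below uses a hand-rolled Gaussian kernel over nearest human-object distances; the paper's ICF is instead informed by an interaction distribution learned during denoising, so treat this only as intuition for what the field represents.

```python
import torch

def interaction_contact_field(human_joints, object_points, sigma=0.1):
    """Toy contact field: per-joint soft contact score from proximity.
    The Gaussian kernel and `sigma` are illustrative choices, not the
    paper's learned formulation.

    human_joints:  (B, J, 3) joint positions
    object_points: (B, P, 3) points sampled on the object surface
    returns:       (B, J)    contact likelihood in [0, 1] per joint
    """
    dists = torch.cdist(human_joints, object_points)  # (B, J, P) pairwise
    nearest = dists.min(dim=-1).values                # per-joint distance to object
    return torch.exp(-nearest ** 2 / (2 * sigma ** 2))

joints = torch.randn(2, 22, 3)   # e.g. 22 SMPL-style body joints
points = torch.randn(2, 512, 3)  # 512 sampled object surface points
contact = interaction_contact_field(joints, points)
print(contact.shape)  # torch.Size([2, 22])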
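```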


Acknowledgements

This work is supported by the National Natural Science Foundation of China (62102036, 62272021), the Beijing Natural Science Foundation (L232102, 4222024), the R&D Program of Beijing Municipal Education Commission (KM202211232003), the Beijing Science and Technology Plan Project (Z231100005923039), the National Key R&D Program of China (No. 2023YFF1203803), and U.S. NSF grants IIS-1715985 and IIS-1812606 (awarded to Hong Qin).