Ponimator: Unfolding Interactive Pose for Versatile Human-human Interaction Animation

📅 2025-10-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses weak contextual awareness and insufficient dynamic modeling in close-proximity two-person interactive motion generation. We propose the first spatiotemporal-prior-augmented dual-conditional diffusion framework. Methodologically, it employs a dual-branch architecture: a temporal branch models pose-sequence evolution, while a spatial branch enforces interaction-structural constraints. The framework supports conditioning on static poses, text, or their combination, enabling image-driven, reactive, and text-to-interaction multimodal synthesis. Our key contribution is the first effective transfer of high-fidelity motion-capture interaction dynamics to open-scenario generation. Extensive evaluation across multiple benchmarks demonstrates substantial improvements in the physical plausibility, temporal coherence, and interaction diversity of generated motions, with superior fidelity and generalization compared to state-of-the-art methods.

📝 Abstract
Close-proximity human-human interactive poses convey rich contextual information about interaction dynamics. Given such poses, humans can intuitively infer the context and anticipate possible past and future dynamics, drawing on strong priors of human behavior. Inspired by this observation, we propose Ponimator, a simple framework anchored on proximal interactive poses for versatile interaction animation. Our training data consists of close-contact two-person poses and their surrounding temporal context from motion-capture interaction datasets. Leveraging interactive pose priors, Ponimator employs two conditional diffusion models: (1) a pose animator that uses the temporal prior to generate dynamic motion sequences from interactive poses, and (2) a pose generator that applies the spatial prior to synthesize interactive poses from a single pose, text, or both when interactive poses are unavailable. Collectively, Ponimator supports diverse tasks, including image-based interaction animation, reaction animation, and text-to-interaction synthesis, facilitating the transfer of interaction knowledge from high-quality mocap data to open-world scenarios. Empirical experiments across diverse datasets and applications demonstrate the universality of the pose prior and the effectiveness and robustness of our framework.
Problem

Research questions and friction points this paper is trying to address.

Generating dynamic human-human interaction animations from poses
Synthesizing interactive poses from single pose or text input
Transferring interaction knowledge from mocap to open-world scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses interactive pose priors for animation
Employs two conditional diffusion models
Supports image-based interaction animation, reaction animation, and text-to-interaction synthesis from poses
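The two conditional diffusion models compose into a single pipeline: a pose generator (spatial prior) first synthesizes a close-contact two-person pose from a single pose, text, or both, and a pose animator (temporal prior) then unfolds that interactive pose into a motion sequence. The sketch below illustrates this data flow only; all names, shapes, and the placeholder "diffusion" logic are illustrative assumptions, not the authors' actual API.

```python
# Hypothetical sketch of Ponimator's two-stage pipeline.
# pose_generator / pose_animator are stand-ins for the paper's two
# conditional diffusion models; the numeric logic is placeholder only.
from dataclasses import dataclass
from typing import List, Optional

Pose = List[float]  # flattened joint parameters for one person (assumed)


@dataclass
class InteractivePose:
    """A close-contact two-person pose pair."""
    person_a: Pose
    person_b: Pose


def pose_generator(single_pose: Optional[Pose] = None,
                   text: Optional[str] = None) -> InteractivePose:
    """Stand-in for the spatial-prior model: synthesizes an interactive
    pose from a single pose, text, or both."""
    base = single_pose if single_pose is not None else [0.0] * 6
    partner = [v + 0.1 for v in base]  # placeholder interaction offset
    return InteractivePose(person_a=base, person_b=partner)


def pose_animator(pose: InteractivePose,
                  n_frames: int = 8) -> List[InteractivePose]:
    """Stand-in for the temporal-prior model: unfolds an interactive
    pose into a dynamic motion sequence around it."""
    seq = []
    for t in range(n_frames):
        drift = 0.01 * t  # placeholder temporal evolution
        seq.append(InteractivePose(
            person_a=[v + drift for v in pose.person_a],
            person_b=[v + drift for v in pose.person_b],
        ))
    return seq


# Text-to-interaction synthesis: generate a pose pair, then animate it.
motion = pose_animator(pose_generator(text="two people hug"), n_frames=8)
print(len(motion))  # 8 frames
```

Image-based animation would follow the same composition, except the single pose would be estimated from an image before being passed to the generator; when an interactive pose is already available, the animator can be called directly.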
👥 Authors
Shaowei Liu
University of Illinois Urbana-Champaign
Computer Vision, Robotics
Chuan Guo
Snap Inc.
Bing Zhou
Snap Inc.
Jian Wang
Snap Inc.