🤖 AI Summary
In interactive applications, virtual characters struggle to generate natural, real-time full-body responses in dynamic environments, particularly for obstacle avoidance and group interactions, because existing pipelines decouple pose from trajectory, rely on manual annotation, or lack the kinematic naturalness of motion capture. To address this, we propose an environment-aware bidirectional motion matching framework that, for the first time, explicitly models the coupling between pose and trajectory within motion matching. Our method enables end-to-end, automatic obstacle avoidance and coordinated multi-agent animation. It jointly extracts shape, pose, and trajectory features from motion capture data and combines them with collision-aware retrieval optimization and real-time matching. Experiments demonstrate that our system generates natural, temporally consistent full-body animations in crowded scenes, significantly improving environmental adaptability and behavioral plausibility.
📝 Abstract
Interactive applications demand believable characters that respond naturally to dynamic environments. Traditional character animation techniques often struggle to handle arbitrary situations, leading to a growing trend of dynamically selecting motion-captured animations based on predefined features. While Motion Matching has proven effective for locomotion by aligning to target trajectories, animating environment interactions and crowd behaviors remains challenging because the character must account for its surroundings. Existing approaches often involve manual setup or lack the naturalism of motion capture. Furthermore, in crowd animation, body animation is frequently treated as a process separate from trajectory planning, leading to inconsistencies between body pose and root motion. To address these limitations, we present Environment-aware Motion Matching, a novel real-time system for full-body character animation that dynamically adapts to obstacles and other agents, emphasizing the bidirectional relationship between pose and trajectory. In a preprocessing step, we extract shape, pose, and trajectory features from a motion capture database. At runtime, we perform an efficient search that matches user input and the current pose while penalizing collisions with a dynamic environment. Our method allows characters to naturally adjust their pose and trajectory to navigate crowded scenes.
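To make the runtime search described in the abstract concrete, here is a minimal, hypothetical sketch of a motion matching query with a collision penalty. All names, feature layouts, and weights below are illustrative assumptions, not the paper's actual implementation; a real system would use accelerated nearest-neighbor search rather than a linear scan.

```python
import math
from dataclasses import dataclass

@dataclass
class Frame:
    """One candidate frame from the motion capture database (hypothetical layout)."""
    pose: list        # pose feature vector (e.g., selected joint positions/velocities)
    trajectory: list  # predicted future root positions [(x, y), ...]

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def collision_penalty(trajectory, obstacles, radius=0.5):
    """Penalize future root positions that come within `radius` of any obstacle."""
    penalty = 0.0
    for (px, py) in trajectory:
        for (ox, oy) in obstacles:
            d = math.hypot(px - ox, py - oy)
            if d < radius:
                penalty += (radius - d) ** 2
    return penalty

def match(database, current_pose, desired_traj, obstacles,
          w_pose=1.0, w_traj=1.0, w_col=10.0):
    """Return the frame minimizing pose + trajectory mismatch plus collision cost."""
    best, best_cost = None, float("inf")
    for frame in database:
        cost = (w_pose * sq_dist(frame.pose, current_pose)
                + w_traj * sum(sq_dist(p, q)
                               for p, q in zip(frame.trajectory, desired_traj))
                + w_col * collision_penalty(frame.trajectory, obstacles))
        if cost < best_cost:
            best, best_cost = frame, cost
    return best
```

With a large enough collision weight, a frame whose predicted trajectory passes through an obstacle loses to a frame that detours, which is the intuition behind coupling pose selection and trajectory adaptation in a single search.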