🤖 AI Summary
This work addresses unconditional joint generation of human motion and 3D body shape. Methodologically, it proposes a score-based diffusion framework that eliminates reliance on kinematic priors or post-hoc mesh reconstruction. It avoids over-parameterized inputs and auxiliary losses, employing only standard L2 score matching. A kinematics-aware feature-space normalization is introduced, and loss weights are analytically derived to balance training dynamics. All modules are grounded in theoretical analysis to guarantee stability and generalization. Experiments demonstrate state-of-the-art performance on unconditional motion generation. Moreover, the method achieves, for the first time, high-fidelity, temporally coherent 3D body shape generation directly from diffusion sampling, significantly outperforming conventional two-stage paradigms that first predict keypoints and then reconstruct meshes.
📝 Abstract
Recent work has explored a range of model families for human motion generation, including Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and diffusion-based models. Despite their differences, many methods rely on over-parameterized input features and auxiliary losses to improve empirical results. These strategies should not be strictly necessary for diffusion models to match the human motion distribution. We show that results on par with the state of the art in unconditional human motion generation are achievable with a score-based diffusion model using only careful feature-space normalization and analytically derived weightings for the standard L2 score-matching loss, while generating both motion and shape directly, thereby avoiding slow post-hoc shape recovery from joints. We build the method step by step, with a clear theoretical motivation for each component, and provide targeted ablations demonstrating the effectiveness of each proposed addition in isolation.
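To make the two ingredients named in the abstract concrete, below is a minimal sketch of a feature-normalization helper and a weighted denoising score-matching objective. The function names, the `sigma**2` weighting, and the dummy score function are illustrative assumptions for a generic score model, not the paper's exact derivation.

```python
import numpy as np

def normalize_features(x, mean, std):
    """Per-feature normalization so all channels share a comparable scale.
    mean/std are precomputed dataset statistics (hypothetical helper)."""
    return (x - mean) / std

def weighted_score_matching_loss(score_fn, x0, sigmas, rng):
    """Denoising score-matching loss with an analytic per-noise-level
    weight lambda(sigma) = sigma^2 -- a common choice that cancels the
    1/sigma^2 scale of the score target, keeping every noise level's
    contribution comparable (illustrative, not the paper's derivation)."""
    n = x0.shape[0]
    # Draw one noise level per training example.
    sigma = sigmas[rng.integers(0, len(sigmas), size=n)].reshape(n, 1)
    noise = rng.standard_normal(x0.shape)
    x_noisy = x0 + sigma * noise
    target = -noise / sigma          # score of the Gaussian perturbation kernel
    weight = sigma ** 2              # analytic weighting lambda(sigma)
    return np.mean(weight * (score_fn(x_noisy, sigma) - target) ** 2)
```

With this weighting the objective reduces to the plain L2 residual `E||sigma * s_theta(x_noisy, sigma) + noise||^2`, so no auxiliary losses are needed to stop low-noise levels from dominating the gradient.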