🤖 AI Summary
This work proposes the first unified architecture for video-based human perception built on a pretrained text-to-video diffusion model, addressing the limitation that existing methods rely on multiple specialized models and struggle to jointly handle dense tasks (e.g., depth, surface normals, segmentation, dense pose) and sparse tasks (e.g., 2D/3D keypoints). The approach introduces learnable tokens for sparse prediction and leverages textual prompts to dynamically modulate multi-task inference within a single forward pass. Trained exclusively on synthetic data without any real-world annotations or task-specific fine-tuning, the method matches or surpasses dedicated models across multiple benchmarks. Furthermore, it demonstrates strong zero-shot generalization to multiple humans, anthropomorphic characters, and animals.
📝 Abstract
We present THFM, a unified video foundation model for human-centric perception that jointly addresses dense tasks (depth, surface normals, segmentation, dense pose) and sparse tasks (2D/3D keypoint estimation) within a single architecture. THFM is derived from a pretrained text-to-video diffusion model, repurposed as a single-forward-pass perception model and augmented with learnable tokens for sparse predictions. Modulated by the text prompt, our single unified model performs a variety of perception tasks. Crucially, our model matches or surpasses state-of-the-art specialized models on a variety of benchmarks despite being trained exclusively on synthetic data (i.e., without training on real-world or benchmark-specific data). We further highlight intriguing emergent properties of our model, which we attribute to the underlying diffusion-based video representation. For example, although trained only on videos with a single human in the scene, our model generalizes to multiple humans and to other object classes such as anthropomorphic characters and animals, a capability that has not been demonstrated before.
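To make the interface concrete, the sketch below illustrates the general idea of prompt-modulated multi-task inference with learnable query tokens for sparse outputs. This is not the paper's implementation; every name (`UnifiedPerceiver`, `NUM_TOKENS`, `EMB_DIM`) and the toy backbone are hypothetical stand-ins for the diffusion-based video representation described above.

```python
import numpy as np

NUM_TOKENS = 17  # hypothetical: one learnable token per keypoint
EMB_DIM = 8      # hypothetical embedding width

class UnifiedPerceiver:
    """Toy stand-in for a single-forward-pass, prompt-modulated model."""

    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        # Learnable tokens attached to the video latents for sparse tasks.
        self.sparse_tokens = rng.normal(size=(NUM_TOKENS, EMB_DIM))
        # One prompt embedding per task modulates the shared backbone.
        self.task_prompts = {
            "depth": rng.normal(size=EMB_DIM),
            "keypoints": rng.normal(size=EMB_DIM),
        }

    def forward(self, video, task):
        """video: (T, H, W) array; the task prompt selects the behavior,
        not a separate set of weights."""
        prompt = self.task_prompts[task]
        # Stand-in for the prompt-conditioned backbone: one pass over the
        # video latents, modulated elementwise by the prompt embedding.
        feats = video[..., None] * prompt  # (T, H, W, EMB_DIM)
        if task == "keypoints":
            # Sparse head: score learnable tokens against pooled features.
            pooled = feats.mean(axis=(1, 2))        # (T, EMB_DIM)
            return pooled @ self.sparse_tokens.T    # (T, NUM_TOKENS)
        # Dense head: project features back to a per-pixel map.
        return feats.mean(axis=-1)                  # (T, H, W)
```

The point of the sketch is the control flow: the same weights serve every task, and only the prompt embedding (plus, for sparse tasks, the learnable tokens) changes what the single forward pass produces.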