🤖 AI Summary
This work addresses the limitations of existing text-to-motion retrieval methods, which rely on global embeddings and struggle to capture fine-grained local correspondences, limiting both retrieval accuracy and interpretability. To overcome this, the authors propose a structured pseudo-image representation derived from joint-angle sequences, processed by a pre-trained Vision Transformer. They further introduce an enhanced token-patch late interaction mechanism that enables fine-grained, interpretable bidirectional alignment between text and motion. With Masked Language Modeling added as a regularization objective, the method significantly outperforms state-of-the-art approaches on the HumanML3D and KIT-ML datasets, achieving higher retrieval accuracy while enabling visual analysis of localized semantic correspondences.
📝 Abstract
Text-motion retrieval aims to learn a semantically aligned latent space between natural-language descriptions and 3D human skeleton motion sequences, enabling bidirectional search across the two modalities. Most existing methods use a dual-encoder framework that compresses motion and text into global embeddings, discarding fine-grained local correspondences and thus reducing retrieval accuracy. Moreover, such global-embedding methods offer limited interpretability of retrieval results. To overcome these limitations, we propose an interpretable, joint-angle-based motion representation that maps joint-level local features into a structured pseudo-image compatible with pre-trained Vision Transformers. For text-motion retrieval, we employ MaxSim, a token-wise late interaction mechanism, and enhance it with Masked Language Modeling regularization to foster robust, interpretable text-motion alignment. Extensive experiments on HumanML3D and KIT-ML show that our method outperforms state-of-the-art text-motion retrieval approaches while offering interpretable, fine-grained correspondences between text and motion. The code is available in the supplementary material.
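The MaxSim late interaction the abstract refers to (ColBERT-style) scores a text-motion pair by matching each text token to its single most similar motion patch and summing those maxima; the per-token argmax is what yields the interpretable token-to-patch alignments. A minimal NumPy sketch of that scoring rule, with assumed embedding shapes and no claim to match the authors' exact implementation:

```python
import numpy as np

def maxsim_score(text_tokens: np.ndarray, motion_patches: np.ndarray) -> float:
    """MaxSim late-interaction score (illustrative sketch, not the paper's code).

    text_tokens:    (n_tokens, d) text token embeddings
    motion_patches: (n_patches, d) embeddings of motion pseudo-image patches
    Each text token is matched to its most similar patch; the maxima are summed.
    """
    # L2-normalize rows so dot products become cosine similarities
    t = text_tokens / np.linalg.norm(text_tokens, axis=1, keepdims=True)
    p = motion_patches / np.linalg.norm(motion_patches, axis=1, keepdims=True)
    sim = t @ p.T                    # (n_tokens, n_patches) similarity matrix
    return float(sim.max(axis=1).sum())  # best patch per token, summed
```

Keeping `sim.argmax(axis=1)` alongside the score gives, for each text token, the index of its best-matching motion patch, which is the kind of localized correspondence the paper visualizes.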