AI Summary
Existing motion question answering methods rely on explicit program execution and manually designed modules, limiting scalability and generalization. To address this, we propose IMoRe, an Implicit Program-guided Reasoning framework that directly conditions on structured program functions for implicit reasoning. IMoRe introduces a program-guided reading mechanism that dynamically selects multi-level motion representations from a pretrained motion ViT backbone, combined with program-driven attention and an iterative memory update module, enabling multi-granularity feature extraction and unified handling of diverse query types. Evaluated on Babel-QA, IMoRe achieves state-of-the-art performance. Moreover, it demonstrates strong generalization on HuMMan, a newly constructed human motion question-answering benchmark. Both the code and the HuMMan dataset are publicly released.
Abstract
Existing human motion Q&A methods rely on explicit program execution, where the need for manually defined functional modules limits scalability and adaptability. To overcome this, we propose an implicit program-guided motion reasoning (IMoRe) framework that unifies reasoning across multiple query types without manually designed modules. Unlike existing implicit reasoning approaches that infer reasoning operations from question words, our model directly conditions on structured program functions, enabling more precise execution of reasoning steps. Additionally, we introduce a program-guided reading mechanism that dynamically selects multi-level motion representations from a pretrained motion Vision Transformer (ViT), capturing both high-level semantics and fine-grained motion cues. The reasoning module iteratively refines memory representations, leveraging structured program functions to extract relevant information for different query types. Our model achieves state-of-the-art performance on Babel-QA and generalizes to a newly constructed motion Q&A dataset based on HuMMan, demonstrating its adaptability across different motion reasoning datasets. Code and dataset are available at: https://github.com/LUNAProject22/IMoRe.
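To make the two mechanisms in the abstract concrete, the sketch below shows, in simplified NumPy form, (1) a program-guided read that attends over multi-level ViT features using a program-function embedding as the query, and (2) an iterative, gated memory update over a sequence of program functions. All names, shapes, and the scalar gate are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def program_guided_read(level_feats, prog_emb, W_q, W_k):
    """Attend over multi-level motion features, guided by one program function.

    level_feats: (L, D) one pooled feature per ViT level (hypothetical shape)
    prog_emb:    (D,)   embedding of the current structured program function
    W_q, W_k:    (D, D) learned projections (random here, for illustration)
    """
    q = prog_emb @ W_q                           # query from program function
    k = level_feats @ W_k                        # keys from multi-level features
    w = softmax(k @ q / np.sqrt(k.shape[-1]))    # attention weights over levels
    return w @ level_feats                       # program-weighted representation

def iterative_reasoning(level_feats, program, W_q, W_k):
    """Iteratively refine a memory vector, one program function per step.

    program: list of (D,) program-function embeddings in execution order.
    The sigmoid gate blending read and memory is an illustrative choice.
    """
    memory = np.zeros(level_feats.shape[-1])
    for prog_emb in program:
        read = program_guided_read(level_feats, prog_emb, W_q, W_k)
        gate = 1.0 / (1.0 + np.exp(-(memory * read).sum()))  # scalar update gate
        memory = gate * read + (1.0 - gate) * memory
    return memory
```

A usage example: embed each function of the parsed program (e.g. filter, query, relate), run `iterative_reasoning` over the multi-level features, and feed the final memory vector to an answer classifier.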