🤖 AI Summary
Multimodal large language models (MLLMs) exhibit systematic biases in orientation understanding because their training data contains inconsistent perspective annotations, in particular the conflation of absolute and relative directional references. To address this, we propose egocentric instruction tuning, which grounds orientation annotations in the user's first-person perspective. We introduce a standardized egocentric orientation annotation schema and an automated pipeline that leverages MLLMs' fine-grained visual perception to generate high-quality, perspective-aligned instruction data for tuning. We further construct EgoOrientBench, the first cross-domain benchmark dedicated to egocentric orientation understanding, which evaluates models jointly across three tasks. Experiments demonstrate significant improvements in orientation accuracy on EgoOrientBench without compromising general multimodal capabilities. All code, instruction data, and the benchmark dataset are publicly released.
📝 Abstract
Multimodal large language models (MLLMs) act as essential interfaces connecting humans with AI technologies in multimodal applications. However, current MLLMs struggle to accurately interpret object orientation in images because orientation annotations in their training data are inconsistent, which hinders the development of a coherent orientation understanding. To overcome this, we propose egocentric instruction tuning, which aligns MLLMs' orientation understanding with the user's perspective, based on a consistent annotation standard derived from the user's egocentric viewpoint. We first generate egocentric instruction data by leveraging MLLMs' ability to recognize object details and applying prior knowledge about orientation. Using this data, we perform instruction tuning to enhance the model's capability for accurate orientation interpretation. In addition, we introduce EgoOrientBench, a benchmark that evaluates MLLMs' orientation understanding across three tasks using images collected from diverse domains. Experimental results on this benchmark show that egocentric instruction tuning significantly improves orientation understanding without compromising overall MLLM performance. The instruction data and benchmark dataset are available on our project page at https://github.com/jhCOR/EgoOrientBench.
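To make the idea of perspective-aligned instruction data concrete, below is a minimal sketch of how one such record might be constructed. The eight orientation classes, the record schema, and the question wording are illustrative assumptions, not the paper's actual format; see the project repository for the released data.

```python
# Hypothetical sketch of a perspective-aligned instruction record.
# ASSUMPTION: orientations are labeled with one of eight egocentric classes
# named from the viewer's point of view (names are illustrative only).
EGOCENTRIC_ORIENTATIONS = [
    "front", "back", "left", "right",
    "front-left", "front-right", "back-left", "back-right",
]

def make_instruction_sample(image_path: str, object_name: str, orientation: str) -> dict:
    """Build one instruction-tuning record phrased from the user's egocentric viewpoint."""
    if orientation not in EGOCENTRIC_ORIENTATIONS:
        raise ValueError(f"unknown orientation: {orientation}")
    # Question and answer are anchored to the viewer, so "left"/"right" are
    # unambiguous relative to the camera rather than to the object itself.
    question = (
        f"From your point of view as the viewer, "
        f"which direction is the {object_name} facing?"
    )
    return {
        "image": image_path,
        "conversations": [
            {"from": "human", "value": question},
            {"from": "gpt", "value": f"The {object_name} is facing {orientation}."},
        ],
    }

sample = make_instruction_sample("images/car_001.jpg", "car", "front-left")
print(sample["conversations"][1]["value"])
```

A pipeline like the one described in the paper would fill in the object name and orientation automatically from an MLLM's detailed description of the object, then use records of this shape for instruction tuning.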