Is 'Right' Right? Enhancing Object Orientation Understanding in Multimodal Language Models through Egocentric Instruction Tuning

📅 2024-11-24
🏛️ arXiv.org
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) exhibit systematic biases in orientation understanding, largely because training data mixes inconsistent perspective annotations, conflating absolute and viewer-relative directional references. To address this, the paper proposes egocentric instruction tuning, which grounds orientation annotations in a single consistent standard: the user's first-person perspective. An automated pipeline leverages MLLMs' fine-grained recognition of object details to generate perspective-aligned instruction data, which is then used to fine-tune the models. The paper also introduces EgoOrientBench, a benchmark that evaluates egocentric orientation understanding across three tasks on images drawn from diverse domains. Experiments show significant gains in orientation accuracy on EgoOrientBench without compromising general multimodal capabilities. Code, instruction data, and the benchmark dataset are publicly released.

📝 Abstract
Multimodal large language models (MLLMs) act as essential interfaces, connecting humans with AI technologies in multimodal applications. However, current MLLMs face challenges in accurately interpreting object orientation in images due to inconsistent orientation annotations in training data, hindering the development of a coherent orientation understanding. To overcome this, we propose egocentric instruction tuning, which aligns MLLMs' orientation understanding with the user's perspective, based on a consistent annotation standard derived from the user's egocentric viewpoint. We first generate egocentric instruction data that leverages MLLMs' ability to recognize object details and applies prior knowledge for orientation understanding. Using this data, we perform instruction tuning to enhance the model's capability for accurate orientation interpretation. In addition, we introduce EgoOrientBench, a benchmark that evaluates MLLMs' orientation understanding across three tasks using images collected from diverse domains. Experimental results on this benchmark show that egocentric instruction tuning significantly improves orientation understanding without compromising overall MLLM performance. The instruction data and benchmark dataset are available on our project page at https://github.com/jhCOR/EgoOrientBench.
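The data-generation step described above can be sketched in a minimal form: given an object's viewer-relative pose, derive an egocentric orientation label and wrap it in an instruction-tuning sample. The eight-way label set, 45-degree binning, and sample format here are illustrative assumptions, not the paper's exact annotation schema or pipeline.

```python
# Hypothetical sketch: map an object's yaw (relative to the viewer) to an
# egocentric orientation label, then build an instruction-tuning sample.

# Eight orientation classes, ordered counter-clockwise from "front"
# (object facing the viewer head-on).
LABELS = [
    "front", "front-left", "left", "back-left",
    "back", "back-right", "right", "front-right",
]

def egocentric_label(yaw_deg: float) -> str:
    """Bin a viewer-relative yaw (degrees) into an egocentric label.

    yaw_deg = 0 means the object faces the viewer; angles increase
    counter-clockwise from the viewer's standpoint.
    """
    idx = round((yaw_deg % 360) / 45) % 8  # 45-degree bins
    return LABELS[idx]

def make_instruction_sample(image_path: str, obj: str, yaw_deg: float) -> dict:
    """Build one (image, question, answer) tuple for instruction tuning."""
    return {
        "image": image_path,
        "question": f"From your point of view, which way is the {obj} facing?",
        "answer": f"The {obj} is facing {egocentric_label(yaw_deg)}.",
    }

sample = make_instruction_sample("car_001.jpg", "car", 90.0)
print(sample["answer"])  # orientation stated from the viewer's perspective
```

The key design point is consistency: every sample answers from the same egocentric reference frame, so the model never sees "left" meaning the object's own left in one sample and the viewer's left in another.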
Problem

Research questions and friction points this paper is trying to address.

Improving object orientation interpretation in MLLMs
Addressing inconsistent orientation annotations in training data
Aligning MLLMs' orientation understanding with user perspective
Innovation

Methods, ideas, or system contributions that make the work stand out.

Egocentric instruction tuning aligns MLLMs with user perspective
Generates orientation data using MLLMs' detail recognition
Introduces EgoOrientBench for cross-domain orientation evaluation
Ji Hyeok Jung
Sogang University
Eun Tae Kim
Sogang University
Seo Yeon Kim
Sogang University
Joo Ho Lee
Sogang University
Bumsoo Kim
Chung-Ang University
Buru Chang
Korea University
Natural Language Processing · Multimodal Machine Learning · Data Mining