POV Learning: Individual Alignment of Multimodal Models using Human Perception

📅 2024-05-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the challenge of aligning multimodal models with individual subjective perception, moving beyond conventional population-level alignment paradigms toward personalized AI alignment. Methodologically, it introduces the first framework that explicitly leverages dynamic individual perceptual signals—such as eye-tracking trajectories—as supervision for alignment, proposing a Perception-Guided Multimodal Transformer (PG-MT) to enable perception-informed cross-modal entailment reasoning. Additionally, the authors release the first multimodal dataset explicitly designed for modeling subjective point-of-view (POV) at the individual level. Experiments demonstrate significant improvements in model performance on individual subjective evaluation tasks, empirically validating that fine-grained perceptual signals effectively guide models to conform to users’ idiosyncratic judgments and value preferences. This work establishes a scalable methodological framework and provides empirical grounding for individualized alignment of multimodal AI systems.

📝 Abstract
Aligning machine learning systems with human expectations is mostly attempted by training with manually vetted human behavioral samples, typically explicit feedback. This is done on a population level, since the context capturing the subjective Point-Of-View (POV) of a concrete person in a specific situational context is not retained in the data. However, we argue that alignment on an individual level can considerably boost the subjective predictive performance for the individual user interacting with the system. Since perception differs for each person, the same situation is observed differently. Consequently, the basis for decision making and the subsequent reasoning processes and observable reactions differ. We hypothesize that individual perception patterns can be used to improve alignment on an individual level. We test this by integrating perception information into machine learning systems and measuring their predictive performance w.r.t. individual subjective assessments. For our empirical study, we collect a novel data set of multimodal stimuli and corresponding eye tracking sequences for the novel task of Perception-Guided Crossmodal Entailment and tackle it with our Perception-Guided Multimodal Transformer. Our findings suggest that exploiting individual perception signals for the machine learning of subjective human assessments provides a valuable cue for individual alignment. It not only improves the overall predictive performance from the point-of-view of the individual user but might also contribute to steering AI systems towards every person's individual expectations and values.
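The abstract's core idea is that a viewer's gaze pattern carries individual-level signal about which parts of a stimulus matter to them. The paper does not publish its fusion mechanism in this listing, so the following is only a minimal, hypothetical sketch of one way such a signal could condition cross-modal processing: image-region features are pooled with weights proportional to how long the individual fixated each region, so the resulting representation reflects that person's perception rather than a uniform view. The function name, the grid layout, and the fixation format are all illustrative assumptions, not the authors' Perception-Guided Multimodal Transformer.

```python
def gaze_weighted_pooling(region_feats, fixations, grid=(3, 3)):
    """Pool per-region image features, weighted by an individual's gaze.

    Hypothetical illustration, not the paper's actual method.
    region_feats: list of feature vectors, one per grid cell (row-major).
    fixations:    list of (x, y, duration) tuples, x and y normalized to [0, 1).
    Returns a single pooled feature vector biased toward fixated regions.
    """
    gh, gw = grid
    weights = [0.0] * (gh * gw)
    for x, y, dur in fixations:
        # map the normalized fixation point to a grid cell
        row = min(int(y * gh), gh - 1)
        col = min(int(x * gw), gw - 1)
        weights[row * gw + col] += dur
    if sum(weights) == 0:
        weights = [1.0] * len(weights)  # no fixations: uniform pooling
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(region_feats[0])
    return [sum(w * f[k] for w, f in zip(weights, region_feats))
            for k in range(dim)]

# toy example: 9 regions with 2-dim features; only regions 0 and 8 are non-zero
feats = [[0.0, 0.0] for _ in range(9)]
feats[0] = [1.0, 0.0]   # top-left region
feats[8] = [0.0, 1.0]   # bottom-right region
# this viewer spent twice as long on the top-left as on the bottom-right
fix = [(0.1, 0.1, 200), (0.9, 0.9, 100)]
pooled = gaze_weighted_pooling(feats, fix)  # ~[0.667, 0.333]
```

A transformer-based model like the one named in the abstract would more plausibly inject such gaze weights into attention scores rather than pool features directly, but the weighting principle is the same.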
Problem

Research questions and friction points this paper is trying to address.

Aligning AI with individual human perception patterns
Improving predictive performance using personal perception data
Enhancing AI alignment with subjective user expectations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Individual alignment using human perception patterns
Perception-Guided Multimodal Transformer for learning
Eye tracking data enhances subjective predictive performance
Simon Werner
Trier University
Katharina Christ
Universität Innsbruck
Laura Bernardy
Trier University
Marion G. Müller
Trier University
Achim Rettinger
Trier University
Machine Learning · Semantic Technologies · Natural Language Processing · Artificial Intelligence