Aligning VLM Assistants with Personalized Situated Cognition

📅 2025-06-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (VLMs) are primarily aligned with generic objectives such as harmlessness and factual correctness, overlooking the personalized expectations that arise from users' diverse social roles and cognitive backgrounds, which leads to misaligned responses in real-world assistance. To address this, the authors characterize individuals using the sociological concept of the Role-Set and evaluate personalized alignment by examining the actions individuals would take in a given situation. They construct PCogAlignBench, a benchmark comprising 18k instances across 20 individuals with distinct Role-Sets, and propose PCogAlign, a framework that builds a cognition-aware, action-based reward model for personalized alignment. Experimental results and human evaluations demonstrate the benchmark's reliability and the framework's effectiveness; the benchmark and code are publicly released.

📝 Abstract
Vision-language models (VLMs) aligned with general human objectives, such as being harmless and hallucination-free, have become valuable assistants of humans in managing visual tasks. However, people with diversified backgrounds have different cognition even in the same situation. Consequently, they may have personalized expectations for VLM assistants. This highlights the urgent need to align VLM assistants with personalized situated cognition for real-world assistance. To study this problem, we first simplify it by characterizing individuals based on the sociological concept of Role-Set. Then, we propose to evaluate the individuals' actions to examine whether the personalized alignment is achieved. Further, we construct a benchmark named PCogAlignBench, which includes 18k instances and 20 individuals with different Role-Sets. Finally, we present a framework called PCogAlign, which constructs a cognition-aware and action-based reward model for personalized alignment. Experimental results and human evaluations demonstrate the reliability of the PCogAlignBench and the effectiveness of our proposed PCogAlign. We will open-source the constructed benchmark and code at https://github.com/NLPGM/PCogAlign.
Problem

Research questions and friction points this paper is trying to address.

Aligning VLMs with personalized situated cognition
Addressing diverse individual expectations for VLM assistants
Evaluating personalized alignment through action-based metrics
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Role-Set to characterize individuals
Constructs PCogAlignBench with 18k instances
Develops cognition-aware action-based reward model
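The summary and abstract describe the reward model only at a high level. As an illustration only, the sketch below shows one way a Role-Set profile and an action-based reward signal could be structured; every name here (RoleSet, Situation, action_based_reward, the keyword-matching heuristic) is hypothetical and stands in for the paper's actual learned reward model, whose details are in the released code.

```python
from dataclasses import dataclass, field

@dataclass
class RoleSet:
    """Hypothetical Role-Set profile: a social status plus the set of
    roles attached to it, following the sociological concept the paper cites."""
    status: str                                  # e.g. "new parent"
    roles: list = field(default_factory=list)    # e.g. ["caregiver", "partner"]

@dataclass
class Situation:
    """A visual situation paired with the user's query."""
    image_description: str
    query: str

def action_based_reward(role_set: RoleSet, situation: Situation,
                        response: str, expected_actions: list) -> float:
    """Toy action-based reward: score a candidate response by the fraction
    of the individual's expected next actions it supports. Simple substring
    matching stands in for a learned, cognition-aware scoring model."""
    text = response.lower()
    hits = sum(1 for action in expected_actions if action.lower() in text)
    return hits / max(len(expected_actions), 1)

# Usage: rank two candidate responses for the same individual and situation.
rs = RoleSet("new parent", ["caregiver", "partner"])
sit = Situation("a cluttered kitchen", "What should I do first?")
expected = ["store cleaning products out of reach", "clear the counter"]
good = "First, store cleaning products out of reach, then clear the counter."
bad = "Repaint the walls."
assert action_based_reward(rs, sit, good, expected) > action_based_reward(rs, sit, bad, expected)
```

The design point this toy captures is the paper's evaluation idea: rather than judging a response's text directly, score it by whether it leads the specific individual toward appropriate actions in their situation.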
Authors

Yongqi Li — School of Computer Science, Wuhan University, China
Shen Zhou — Wuhan University
Xiaohu Li — Stevens Institute of Technology (applied probability, mathematical statistics, actuarial science, stochastic orders, reliability)
Xin Miao — School of Computer Science, Wuhan University, China
Jintao Wen — School of Computer Science, Wuhan University, China
Mayi Xu — Wuhan University (natural language processing)
Jianhao Chen — School of Computer Science, Wuhan University, China; Zhongguancun Academy, Beijing, China
Birong Pan — School of Computer Science, Wuhan University, China
Hankun Kang — School of Computer Science, Wuhan University, China
Yuanyuan Zhu — School of Computer Science, Wuhan University, China
Ming Zhong — School of Computer Science, Wuhan University, China; Zhongguancun Academy, Beijing, China
Tieyun Qian — Wuhan University (natural language processing, web data mining)