🤖 AI Summary
Current vision-language models (VLMs) are primarily aligned with generic safety and factual correctness, overlooking users' personalized needs arising from diverse social roles and cognitive backgrounds, which leads to response misalignment in real-world applications. To address this, we propose PCogAlign, the first framework that integrates sociologically grounded, Role-Set-based role modeling with action-oriented evaluation to align VLMs with context-aware, personalized cognition. We introduce PCogAlignBench, the first benchmark for this task, covering 20 socially defined roles and comprising 18k samples, and design a cognition-aware, action-driven reward modeling approach. Alignment is achieved via targeted VLM fine-tuning and validated through human-in-the-loop evaluation. Experiments demonstrate that PCogAlign significantly improves response consistency and practical utility across diverse user profiles. The code and benchmark are publicly released.
📝 Abstract
Vision-language models (VLMs) aligned with general human objectives, such as being harmless and hallucination-free, have become valuable assistants to humans in managing visual tasks. However, people with diverse backgrounds have different cognition even in the same situation and may therefore have personalized expectations of VLM assistants. This highlights the urgent need to align VLM assistants with personalized situated cognition for real-world assistance. To study this problem, we first simplify it by characterizing individuals based on the sociological concept of the Role-Set. We then propose evaluating individuals' actions to examine whether personalized alignment has been achieved. Building on this, we construct a benchmark named PCogAlignBench, which includes 18k instances and 20 individuals with different Role-Sets. Finally, we present a framework called PCogAlign, which constructs a cognition-aware and action-based reward model for personalized alignment. Experimental results and human evaluations demonstrate the reliability of PCogAlignBench and the effectiveness of our proposed PCogAlign. We will open-source the constructed benchmark and code at https://github.com/NLPGM/PCogAlign.