Ego: Embedding-Guided Personalization of Vision-Language Models

πŸ“… 2026-03-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing vision-language models often rely on additional training or external modules for personalization, which limits their generalization, scalability, and deployment efficiency. This work proposes a lightweight, fine-tuning-free approach that leverages the model’s intrinsic attention mechanisms to automatically extract visual tokens representing target concepts as β€œconcept memories.” During inference, these memories enable efficient personalized responses through embedding guidance. The method unifies support for single-concept, multi-concept, and video-based scenarios, consistently outperforming current state-of-the-art approaches across diverse settings. Notably, it achieves high performance with minimal computational overhead, demonstrating strong generality and practical utility.

πŸ“ Abstract
AI assistants that support humans in daily life are becoming increasingly feasible, driven by rapid advancements in multimodal language models. A key challenge lies in overcoming the generic nature of these models to deliver personalized experiences. Existing approaches to personalizing large vision-language models often rely on additional training stages, which limit generality and scalability, or on engineered pipelines with external pre-trained modules, which hinder deployment efficiency. In this work, we propose an efficient personalization method that leverages the model's inherent ability to capture personalized concepts. Specifically, we extract visual tokens that predominantly represent the target concept by utilizing the model's internal attention mechanisms. These tokens serve as a memory of that specific concept, enabling the model to recall and describe it when it appears in test images. We conduct a comprehensive and unified evaluation of our approach and state-of-the-art methods across various personalization settings, including single-concept, multi-concept, and video personalization, demonstrating strong performance gains with minimal personalization overhead.
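The core idea in the abstract can be illustrated with a minimal sketch: rank the visual tokens by how much attention they receive from the target concept, keep the top-k as a "concept memory," and at test time score a new image by comparing its tokens against that memory. The function names, the use of averaged attention weights, and the cosine-similarity matching rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def extract_concept_memory(visual_tokens, attention_weights, k=3):
    """Keep the k visual tokens that receive the most attention.

    visual_tokens: (N, d) patch embeddings from the vision encoder.
    attention_weights: (N,) attention mass each token receives from
    the concept (assumed here to be pre-averaged over heads/layers).
    Returns a (k, d) array serving as the "concept memory".
    """
    top_idx = np.argsort(attention_weights)[-k:]  # indices of top-k tokens
    return visual_tokens[top_idx]

def concept_similarity(memory, test_tokens):
    """Max cosine similarity between any memory token and any test-image
    token; a high score suggests the concept is present (an assumed
    matching rule for illustration, not the paper's guidance mechanism)."""
    def unit(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    sims = unit(memory) @ unit(test_tokens).T  # (k, M) pairwise cosines
    return float(sims.max())
```

Because the memory is just a small set of existing token embeddings, personalization here adds no training and negligible inference cost, which matches the "fine-tuning-free, minimal overhead" framing in the summary.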
Problem

Research questions and friction points this paper is trying to address.

personalization
vision-language models
embedding
multimodal AI
concept representation
Innovation

Methods, ideas, or system contributions that make the work stand out.

personalization
vision-language models
attention mechanisms
visual tokens
embedding-guided
πŸ”Ž Similar Papers
No similar papers found.