UniCTokens: Boosting Personalized Understanding and Generation via Unified Concept Tokens

📅 2025-05-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing methods for personalized understanding and generation employ disjoint concept tokens, limiting their capacity for fine-grained, knowledge-driven generation under complex prompts (e.g., “bo wearing its hat”). To address this, we propose UnifyToken—a unified concept token framework that enables synergistic enhancement across visual-language understanding, generation, and knowledge-driven generation. UnifyToken introduces a three-stage progressive training paradigm: (i) understanding warm-up, (ii) generation guidance, and (iii) understanding feedback; integrates concept embedding optimization; and incorporates self-supervised knowledge distillation. We further introduce UnifyBench—the first comprehensive benchmark covering understanding, generation, and knowledge-driven generation tasks—to rigorously evaluate personalized multimodal reasoning. Extensive experiments demonstrate that UnifyToken achieves state-of-the-art performance on UnifyBench, particularly excelling in personalized knowledge-driven generation, outperforming prior approaches by a significant margin.

Technology Category

Application Category

📝 Abstract
Personalized models have demonstrated remarkable success in understanding and generating concepts provided by users. However, existing methods use separate concept tokens for understanding and generation, treating these tasks in isolation. This may result in limitations for generating images with complex prompts. For example, given the concept $langle bo angle$, generating"$langle bo angle$ wearing its hat"without additional textual descriptions of its hat. We call this kind of generation personalized knowledge-driven generation. To address the limitation, we present UniCTokens, a novel framework that effectively integrates personalized information into a unified vision language model (VLM) for understanding and generation. UniCTokens trains a set of unified concept tokens to leverage complementary semantics, boosting two personalized tasks. Moreover, we propose a progressive training strategy with three stages: understanding warm-up, bootstrapping generation from understanding, and deepening understanding from generation to enhance mutual benefits between both tasks. To quantitatively evaluate the unified VLM personalization, we present UnifyBench, the first benchmark for assessing concept understanding, concept generation, and knowledge-driven generation. Experimental results on UnifyBench indicate that UniCTokens shows competitive performance compared to leading methods in concept understanding, concept generation, and achieving state-of-the-art results in personalized knowledge-driven generation. Our research demonstrates that enhanced understanding improves generation, and the generation process can yield valuable insights into understanding. Our code and dataset will be released at: href{https://github.com/arctanxarc/UniCTokens}{https://github.com/arctanxarc/UniCTokens}.
Problem

Research questions and friction points this paper is trying to address.

Unifies concept tokens for understanding and generation tasks
Enhances personalized knowledge-driven image generation
Proposes a progressive training strategy for mutual task benefits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified concept tokens for understanding and generation
Progressive training strategy with three stages
UnifyBench benchmark for comprehensive evaluation
🔎 Similar Papers
No similar papers found.