ASemConsist: Adaptive Semantic Feature Control for Training-Free Identity-Consistent Generation

📅 2025-12-29
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses the challenge of simultaneously preserving character identity and ensuring prompt alignment in cross-scene generation with text-to-image diffusion models. The authors propose a fine-tuning-free collaborative optimization framework with three key contributions: (1) a padding-based semantic reuse mechanism that enables containerized text embeddings and selective editing; (2) an adaptive fuzzy identity constraint that dynamically balances identity fidelity and generation diversity; and (3) the Consistency Quality Score (CQS), a multi-objective evaluation framework that jointly quantifies identity consistency and text–image alignment. Across multiple benchmarks, the method achieves state-of-the-art performance: ID-Recall improves by 23.6% and CQS by 18.4%, significantly alleviating the inherent trade-off between identity preservation and prompt adherence in diffusion-based generation.
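The summary says CQS jointly quantifies identity consistency and text–image alignment while "explicitly capturing performance imbalances" between them. The paper's exact formula is not given here; a minimal illustrative sketch, assuming both component scores lie in [0, 1], could combine them with a harmonic mean, which penalizes imbalance between the two objectives:

```python
def consistency_quality_score(id_scores, align_scores):
    """Illustrative combination of per-image identity-consistency scores
    and text-alignment scores (each in [0, 1]) into one metric.

    NOTE: hypothetical sketch only; the paper's actual CQS definition is
    not reproduced here. The harmonic mean drops sharply when either
    component is weak, matching the stated goal of exposing trade-offs.
    """
    id_mean = sum(id_scores) / len(id_scores)
    align_mean = sum(align_scores) / len(align_scores)
    if id_mean + align_mean == 0:
        return 0.0
    return 2 * id_mean * align_mean / (id_mean + align_mean)
```

A method that scores 0.9 on identity but 0.5 on alignment is rated well below 0.9 under this scheme, whereas a simple average would mask the imbalance.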

πŸ“ Abstract
Recent text-to-image diffusion models have significantly improved visual quality and text alignment. However, generating a sequence of images while preserving consistent character identity across diverse scene descriptions remains a challenging task. Existing methods often struggle with a trade-off between maintaining identity consistency and ensuring per-image prompt alignment. In this paper, we introduce a novel framework, ASemConsist, that addresses this challenge through selective text embedding modification, enabling explicit semantic control over character identity without sacrificing prompt alignment. Furthermore, based on our analysis of padding embeddings in FLUX, we propose a semantic control strategy that repurposes padding embeddings as semantic containers. Additionally, we introduce an adaptive feature-sharing strategy that automatically evaluates textual ambiguity and applies constraints only to the ambiguous identity prompt. Finally, we propose a unified evaluation protocol, the Consistency Quality Score (CQS), which integrates identity preservation and per-image text alignment into a single comprehensive metric, explicitly capturing performance imbalances between the two metrics. Our framework achieves state-of-the-art performance, effectively overcoming prior trade-offs. Project page: https://minjung-s.github.io/asemconsist
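The abstract describes an adaptive feature-sharing strategy that "automatically evaluates textual ambiguity and applies constraints only to the ambiguous identity prompt." The paper's ambiguity estimator is not described in this card; a toy gate, assuming a placeholder keyword-coverage measure of ambiguity, might look like:

```python
def should_share_features(prompt, identity_keywords, threshold=0.5):
    """Toy gate for the adaptive feature-sharing idea: apply identity
    constraints only when the prompt under-specifies the character.

    The ambiguity measure below (fraction of identity keywords missing
    from the prompt) is a stand-in; the paper's actual textual-ambiguity
    evaluation is not reproduced here.
    """
    words = set(prompt.lower().split())
    missing = [k for k in identity_keywords if k.lower() not in words]
    ambiguity = len(missing) / max(len(identity_keywords), 1)
    return ambiguity >= threshold
```

The design point this illustrates is the gating itself: specific prompts are left untouched (preserving per-image alignment), while vague ones receive the identity constraint.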
Problem

Research questions and friction points this paper is trying to address.

Generating consistent character identity across diverse scene descriptions
Balancing identity preservation with per-image prompt alignment
Achieving adaptive semantic control without additional training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective text embedding modification for semantic control
Padding embeddings repurposed as semantic containers
Adaptive feature-sharing strategy for ambiguous prompts