🤖 AI Summary
Text-to-image generation often suffers from impure subject rendering and persistent interference elements, hindering high-fidelity output in creative domains such as textile pattern design and meme generation.
Method: We propose a zero-shot subject purification framework featuring (i) an entropy-driven multi-step cross-attention feature weighting and fusion mechanism, integrating FLUX-based entropy-guided feature aggregation with cross-timestep cross-attention optimization; and (ii) an LLM-powered agent that automatically transforms colloquial inputs into fine-grained, subject-centric prompts via semantic grounding.
Contribution/Results: Quantitative evaluation demonstrates a 37% improvement in subject completeness and a 62% reduction in background noise compared to state-of-the-art baselines. The method enables end-to-end, high-quality, subject-preserved image synthesis without requiring task-specific training or manual intervention, establishing new performance benchmarks for subject-focused generative modeling.
📝 Abstract
Generative models are widely used in visual content creation. However, current text-to-image models often face challenges in practical applications-such as textile pattern design and meme generation-due to the presence of unwanted elements that are difficult to separate with existing methods. Meanwhile, subject-reference generation has emerged as a key research trend, highlighting the need for techniques that can produce clean, high-quality subject images while effectively removing extraneous components. To address this challenge, we introduce a framework for reliable subject-centric image generation. In this work, we propose an entropy-based feature-weighted fusion method to merge the informative cross-attention features obtained from each sampling step of the pretrained text-to-image model FLUX, enabling a precise mask prediction and subject-centric generation. Additionally, we have developed an agent framework based on Large Language Models (LLMs) that translates users' casual inputs into more descriptive prompts, leading to highly detailed image generation. Simultaneously, the agents extract primary elements of prompts to guide the entropy-based feature fusion, ensuring focused primary element generation without extraneous components. Experimental results and user studies demonstrate our methods generates high-quality subject-centric images, outperform existing methods or other possible pipelines, highlighting the effectiveness of our approach.