🤖 AI Summary
This work addresses the challenge of modeling long-range historical dependencies in multi-turn conversational image generation, where existing approaches struggle with non-Markovian behaviors such as backtracking, undo operations, and cross-turn entity references. To this end, we propose a history-conditioned, non-Markovian framework that integrates a token-level caching mechanism with a name-driven personalization strategy to mitigate identity drift and enable high-fidelity reconstruction and editing. Our approach combines a reconstruction-based DiT detokenizer, a multi-stage fine-tuning curriculum, and a history-aware multimodal large language model, significantly enhancing multi-turn consistency and instruction following while preserving strong single-turn generation and editing performance.
📝 Abstract
Conversational image generation requires a model to follow user instructions across multiple rounds of interaction, grounded in interleaved text and images that accumulate as chat history. While recent multimodal large language models (MLLMs) can generate and edit images, most existing multi-turn benchmarks and training recipes are effectively Markov: the next output depends primarily on the most recent image, enabling shortcut solutions that ignore long-range history. In this work, we formalize and target the more challenging non-Markov setting, where a user may refer back to earlier states, undo changes, or reference entities introduced several rounds ago. We present (i) non-Markov multi-round data construction strategies, including rollback-style editing that forces retrieval of earlier visual states and name-based multi-round personalization that binds names to appearances across rounds; (ii) a history-conditioned training and inference framework with token-level caching to prevent multi-round identity drift; and (iii) enabling improvements for high-fidelity image reconstruction and editable personalization, including a reconstruction-based DiT detokenizer and a multi-stage fine-tuning curriculum. We demonstrate that explicitly training for non-Markov interactions yields substantial improvements in multi-round consistency and instruction compliance, while maintaining strong single-round editing and personalization.
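The caching, rollback, and name-binding behaviors described above can be sketched in a simplified form. This is an illustrative toy only: all class and method names are hypothetical, and token sequences are stand-in lists of integers rather than the actual image tokens an MLLM would cache.

```python
# Minimal sketch (hypothetical names) of a token-level history cache
# supporting non-Markov behaviors: rollback to an earlier visual state
# and cross-round recall of a named entity's appearance.

class HistoryCache:
    """Caches per-round image tokens and name -> appearance bindings."""

    def __init__(self):
        self.rounds = []          # round index -> token sequence of that state
        self.name_bindings = {}   # entity name -> tokens of first appearance

    def commit(self, tokens, names=()):
        """Store the tokens produced in a new round; bind introduced names."""
        self.rounds.append(list(tokens))
        for name in names:
            # Bind the name to this appearance so later rounds can refer to it.
            self.name_bindings.setdefault(name, list(tokens))

    def rollback(self, round_idx):
        """Undo: restore a cached earlier state instead of re-encoding it."""
        tokens = list(self.rounds[round_idx])
        self.rounds.append(tokens)  # the restored state becomes a new round
        return tokens

    def recall(self, name):
        """Retrieve the tokens bound to an entity named in an earlier round."""
        return self.name_bindings[name]


cache = HistoryCache()
cache.commit([1, 2, 3], names=["Bob"])   # round 0 introduces "Bob"
cache.commit([4, 5, 6])                  # round 1 edits the image
restored = cache.rollback(0)             # round 2: undo back to round 0
assert restored == [1, 2, 3]
assert cache.recall("Bob") == [1, 2, 3]  # cross-round entity reference
```

The point of caching tokens (rather than re-encoding the latest rendered image each round) is that a rollback or a named reference reproduces the earlier state exactly, which is what prevents identity drift from accumulating over rounds.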