🤖 AI Summary
Current text-to-image models struggle to reliably generate images from long, compositional prompts. To address this, we propose a multi-expert collaborative framework that employs reflective reinforcement learning—formulated as a Markov decision process—to enable end-to-end dynamic task decomposition and expert model scheduling. The framework integrates pre-trained text-to-image, image-to-image, and vision-language models, with the latter serving as a structured critic that provides fine-grained, interpretable feedback. Our key contribution is the autonomous learning of an optimal expert invocation policy, effectively transcending the inherent limitations of individual models. Evaluated on both standard and custom benchmarks, our method achieves significant improvements over state-of-the-art approaches across three critical dimensions: prompt alignment, image fidelity, and aesthetic quality. Human evaluation further confirms substantial gains in user preference.
📝 Abstract
Recent advances in text-to-image generation have produced strong single-shot models, yet no individual system reliably executes the long, compositional prompts typical of creative workflows. We introduce Image-POSER, a reflective reinforcement learning framework that (i) orchestrates a diverse registry of pretrained text-to-image and image-to-image experts, (ii) handles long-form prompts end-to-end through dynamic task decomposition, and (iii) supervises alignment at each step via structured feedback from a vision-language model critic. By casting image synthesis and editing as a Markov Decision Process, we learn non-trivial expert pipelines that adaptively combine strengths across models. Experiments show that Image-POSER outperforms baselines, including frontier models, across industry-standard and custom benchmarks in alignment, fidelity, and aesthetics, and is consistently preferred in human evaluations. These results highlight that reinforcement learning can endow AI systems with the capacity to autonomously decompose, reorder, and combine visual models, moving towards general-purpose visual assistants.
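The MDP framing described above—a state holding the working image and remaining sub-goals, actions that invoke one expert per step, and a VLM critic supplying the reward—can be sketched as a toy rollout loop. This is a minimal illustration only: the expert names, critic scores, and greedy policy lookup below are hypothetical stand-ins, not Image-POSER's actual components or reward scale.

```python
from dataclasses import dataclass

# Hypothetical expert registry; the real framework orchestrates
# pretrained text-to-image and image-to-image models.
EXPERTS = ["text_to_image", "image_to_image_edit", "image_to_image_refine"]

@dataclass
class State:
    remaining_subtasks: list            # sub-prompts still to satisfy
    image: str = "blank_canvas"         # placeholder for the working image

def vlm_critic(state: State, expert: str) -> float:
    """Stand-in for the VLM critic: scores how much a step helped.

    A deterministic toy rule (synthesis first, then edits) keeps the
    sketch reproducible; the real critic gives structured feedback.
    """
    starting_fresh = state.image == "blank_canvas"
    return 1.0 if (expert == "text_to_image" and starting_fresh) else 0.5

def choose_expert(state: State, policy: dict) -> str:
    """Greedy table lookup; a learned RL policy would replace this."""
    key = (state.image == "blank_canvas", len(state.remaining_subtasks))
    return policy.get(key, EXPERTS[0])

def rollout(prompt_subtasks: list, policy: dict):
    """One episode: decompose the prompt, schedule experts step by step,
    and accumulate the critic's reward along the trajectory."""
    state = State(remaining_subtasks=list(prompt_subtasks))
    total_reward, trace = 0.0, []
    while state.remaining_subtasks:
        expert = choose_expert(state, policy)        # action
        reward = vlm_critic(state, expert)           # critic feedback
        total_reward += reward
        trace.append((expert, reward))
        state.image = f"{state.image}+{expert}"      # apply the expert
        state.remaining_subtasks.pop(0)              # sub-goal handled
    return total_reward, trace
```

In this sketch the policy is a static lookup table; the paper's contribution is learning that invocation policy with reinforcement learning so that non-trivial expert pipelines emerge rather than being hand-coded.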