🤖 AI Summary
This work proposes Wallaroo, a unified multimodal model built on standard autoregressive next-token prediction that, for the first time, supports multimodal understanding, image generation, and image editing simultaneously within a purely autoregressive framework. By decoupling the visual encoding pathway and employing a four-stage training strategy, Wallaroo handles multi-resolution image inputs and outputs while supporting both Chinese and English. The model achieves competitive or state-of-the-art performance across multiple benchmarks compared to existing unified approaches, demonstrating the viability of the next-token-prediction paradigm for unified multimodal modeling.
📝 Abstract
In this work, we introduce Wallaroo, a simple autoregressive baseline that leverages next-token prediction to unify multimodal understanding, image generation, and image editing. Wallaroo also supports multi-resolution image input and output, as well as both Chinese and English. We decouple visual encoding into separate pathways and apply a four-stage training strategy to reshape the model's capabilities. Experiments on various benchmarks show that Wallaroo matches or exceeds other unified models, suggesting the great potential of autoregressive models for unifying multimodal understanding and generation. Our code is available at https://github.com/JiePKU/Wallaroo.
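The unifying idea in the abstract is that text and image tokens can share one sequence, so understanding, generation, and editing all reduce to the same next-token-prediction objective and differ only in how the sequence is laid out. A minimal sketch of that framing (all names, vocabulary sizes, and sequence layouts below are illustrative assumptions, not the paper's actual implementation):

```python
# Hedged sketch: understanding, generation, and editing as one
# next-token-prediction objective over a shared token vocabulary.
# Vocabulary sizes and layouts are hypothetical, not from the paper.

TEXT_VOCAB = 1000    # assumed text vocabulary size
IMAGE_VOCAB = 8192   # assumed discrete image-token codebook size
VOCAB_SIZE = TEXT_VOCAB + IMAGE_VOCAB  # one shared vocabulary

def image_token(code):
    """Offset an image codebook index into the shared vocabulary."""
    return TEXT_VOCAB + code

def make_sequence(task, text_ids, image_codes):
    """Lay out one training sequence; only the ordering changes per task."""
    img = [image_token(c) for c in image_codes]
    if task == "understanding":   # image -> text (e.g. captioning, VQA)
        return img + text_ids
    if task == "generation":      # text prompt -> image tokens
        return text_ids + img
    if task == "editing":         # source image + instruction -> edited image
        return img + text_ids + img
    raise ValueError(f"unknown task: {task}")

# Regardless of task, training pairs are (context, next token):
seq = make_sequence("generation", [1, 2, 3], [0, 5, 9])
pairs = [(seq[:i], seq[i]) for i in range(1, len(seq))]
```

Here every task trains the same model with the same loss; the "decoupled visual encoding" of the paper would sit in front of this step, producing the discrete image codes.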