🤖 AI Summary
Current multimodal large language models (MLLMs) support only pairwise generation within a single response, e.g., text-to-image or text-to-audio, and cannot simultaneously generate arbitrary combinations of modalities (e.g., text + image + audio + video). To address this limitation, the paper proposes Spider, a framework for Any-to-Many Modalities Generation (AMMG), a paradigm enabling flexible, joint output across any subset of modalities. The method combines a base Any-to-Any model, a plug-and-play Efficient Decoders-Controller that drives the multimodal decoders, and an Any-to-Many Instruction Template that produces per-modality signal prompts. The authors further construct a Text-formatted Many-Modal (TMM) training dataset and use the trained model to produce a pseudo X-to-Xs dataset, the first many-modal dataset of its kind. Overall, Spider extends multimodal generation from pairwise to many-modal outputs, introduces scalable multimodal data resources, and provides a modular, extensible architecture for general-purpose multimodal generation.
📝 Abstract
Multimodal LLMs (MLLMs) have emerged as an extension of Large Language Models (LLMs), enabling the integration of various modalities. However, existing Any-to-Any MLLMs are limited to generating pairwise modalities 'Text + X' within a single response, such as Text + {Image or Audio or Video}. To address this limitation, we introduce Spider, a novel efficient Any-to-Many Modalities Generation (AMMG) framework, which can generate an arbitrary combination of modalities 'Text + Xs', such as Text + {Image and Audio and Video}. To achieve efficient AMMG, Spider integrates three core components: a Base Model for basic X-to-X (i.e., Any-to-Any) modality processing, a novel Efficient Decoders-Controller that directs the multimodal decoders to generate Xs (many-modal) content, and an Any-to-Many Instruction Template designed for producing Xs signal prompts. To train Spider, we construct a novel Text-formatted Many-Modal (TMM) dataset, which facilitates learning the X-to-Xs (i.e., Any-to-Many) capability required for AMMG. Finally, the trained Spider generates a pseudo X-to-Xs dataset, the first-ever X-to-Xs many-modal dataset, enhancing the potential of the AMMG task in future research. Overall, this work not only pushes the boundary of multimodal interaction but also provides rich data support for advancing the field.
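To make the Any-to-Many pipeline concrete, the sketch below illustrates one plausible shape of the Decoders-Controller: the LLM emits text interleaved with per-modality signal prompts (here written as `<IMAGE>…</IMAGE>`-style tags), and a controller parses those prompts and routes each one to its modality decoder. This is a minimal, hypothetical sketch; the tag format, decoder registry, and function names are illustrative assumptions, not the paper's actual API.

```python
import re

# Hypothetical decoder registry: in a real system these would wrap
# diffusion/audio/video generators; here they are stubs (assumption).
DECODERS = {
    "IMAGE": lambda prompt: f"[image generated from: {prompt}]",
    "AUDIO": lambda prompt: f"[audio generated from: {prompt}]",
    "VIDEO": lambda prompt: f"[video generated from: {prompt}]",
}

TAG_PATTERN = re.compile(r"<(IMAGE|AUDIO|VIDEO)>(.*?)</\1>", re.DOTALL)

def parse_signal_prompts(llm_output: str):
    """Extract (modality, prompt) pairs from tags like <IMAGE>a cat</IMAGE>."""
    return [(m.group(1), m.group(2).strip()) for m in TAG_PATTERN.finditer(llm_output)]

def decode_many(llm_output: str):
    """Route each signal prompt to its decoder, producing 'Text + Xs' content."""
    text_only = TAG_PATTERN.sub("", llm_output).strip()
    outputs = {"TEXT": text_only}
    for modality, prompt in parse_signal_prompts(llm_output):
        outputs[modality] = DECODERS[modality](prompt)
    return outputs

# One response can carry prompts for several modalities at once.
response = "Here is a beach scene. <IMAGE>sunny beach</IMAGE> <AUDIO>ocean waves</AUDIO>"
result = decode_many(response)
```

Because every signal prompt in a single response is parsed and dispatched, one LLM pass can yield Text + {Image and Audio and Video} rather than a single paired modality.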