AI Summary
This work addresses the limitations of existing approaches in multimodal news classification, which struggle to effectively model complex interactions between text and images and lack interpretability and integration of external knowledge. The paper proposes the first three-stage framework that synergistically combines modular multi-agent collaboration with retrieval-augmented reasoning. The framework integrates multimodal perception, retrieval-augmented reasoning, and gated fusion scoring, further enhanced by a reinforcement learning-driven iterative optimization mechanism. Evaluated on a newly constructed large-scale multimodal news dataset, the proposed method substantially outperforms strong baseline models, achieving significant improvements in both classification accuracy and interpretability.
Abstract
With the growing prevalence of multimodal news content, effective news topic classification demands models capable of jointly understanding and reasoning over heterogeneous data such as text and images. Existing methods often process modalities independently or employ simplistic fusion strategies, limiting their ability to capture complex cross-modal interactions and leverage external knowledge. To overcome these limitations, we propose MultiPress, a novel three-stage multi-agent framework for multimodal news classification. MultiPress integrates specialized agents for multimodal perception, retrieval-augmented reasoning, and gated fusion scoring, followed by a reward-driven iterative optimization mechanism. We validate MultiPress on a newly constructed large-scale multimodal news dataset, demonstrating significant improvements over strong baselines and highlighting the effectiveness of modular multi-agent collaboration and retrieval-augmented reasoning in enhancing classification accuracy and interpretability.
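To make the three-stage pipeline concrete, the sketch below shows one plausible way the stages could be wired together. It is a minimal illustration, not the paper's implementation: all names (`perception_agent`, `retrieval_reasoning_agent`, `gated_fusion`, the proxy reward in `classify`, and the toy topic list) are hypothetical, and the reward-driven update is a simple stand-in for whatever optimization mechanism MultiPress actually uses.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List
import math

# Hypothetical label set and score type; the paper does not specify these.
Scores = Dict[str, float]  # topic label -> confidence
TOPICS = ["politics", "sports", "technology"]


@dataclass
class NewsItem:
    text: str
    image_caption: str  # stand-in for visual features from the perception stage


def perception_agent(item: NewsItem) -> Scores:
    """Stage 1 (sketch): score topics from the raw text and image signal."""
    blob = (item.text + " " + item.image_caption).lower()
    return {t: blob.count(t[:5]) + 0.1 for t in TOPICS}


def retrieval_reasoning_agent(item: NewsItem,
                              retrieve: Callable[[str], List[str]]) -> Scores:
    """Stage 2 (sketch): score topics from externally retrieved evidence."""
    evidence = " ".join(retrieve(item.text)).lower()
    return {t: evidence.count(t[:5]) + 0.1 for t in TOPICS}


def gated_fusion(perception: Scores, reasoning: Scores, gate: float) -> Scores:
    """Stage 3 (sketch): convex combination of the two agents' distributions."""
    return {t: gate * perception[t] + (1 - gate) * reasoning[t] for t in perception}


def softmax(scores: Scores) -> Scores:
    z = max(scores.values())
    exp = {t: math.exp(s - z) for t, s in scores.items()}
    total = sum(exp.values())
    return {t: v / total for t, v in exp.items()}


def classify(item: NewsItem, retrieve: Callable[[str], List[str]],
             gate: float = 0.5, rounds: int = 3, lr: float = 0.1) -> str:
    """Run the pipeline; a proxy reward (confidence gap between the two agents)
    nudges the fusion gate over a few rounds, standing in for the paper's
    reward-driven iterative optimization."""
    for _ in range(rounds):
        p = softmax(perception_agent(item))
        r = softmax(retrieval_reasoning_agent(item, retrieve))
        reward = max(p.values()) - max(r.values())  # illustrative only
        gate = min(1.0, max(0.0, gate + lr * reward))
    fused = gated_fusion(softmax(perception_agent(item)),
                         softmax(retrieval_reasoning_agent(item, retrieve)), gate)
    return max(fused, key=fused.get)


if __name__ == "__main__":
    item = NewsItem(text="New sports stadium opens downtown",
                    image_caption="fans cheering in a sports arena")
    dummy_retrieve = lambda q: ["sports league announces season schedule"]
    print(classify(item, dummy_retrieve))  # -> "sports" on this toy input
```

The point of the sketch is the separation of concerns: each stage is an independent callable, so the fusion gate and the outer optimization loop can be tuned or replaced without touching the perception or retrieval agents.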