Chimera: Improving Generalist Model with Domain-Specific Experts

📅 2024-12-08
📈 Citations: 1
Influential: 0
🤖 AI Summary
General-purpose large multi-modal models show limited comprehension in specialized domains—such as charts, tables, mathematical expressions, and documents—largely because they lack domain-specific prior knowledge and because general and expert representations are misaligned. To address this, the paper proposes a lightweight domain-enhancement framework built on a progressive feature-injection strategy and a Generalist-Specialist Collaboration Masking (GSCM) mechanism, presented as the first approach that lets expert models be co-optimized alongside a general visual encoder with little mutual interference. By combining multi-modal feature alignment, progressive knowledge distillation, and mask-driven collaborative training, the method incorporates cross-domain expert models without retraining the backbone architecture. Evaluated on multi-modal reasoning and visual content extraction tasks across four specialized domains, the framework achieves state-of-the-art performance, with consistent improvements over existing methods.
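As a rough illustration of the feature-injection idea described above, expert features can be projected into the generalist token space and appended to the LMM's visual input. This is only a minimal sketch under assumed interfaces: the function and parameter names (`inject_expert_features`, `project`) are hypothetical and stand in for the paper's learned projectors over expert encoder outputs.

```python
def inject_expert_features(general_tokens, expert_tokens, project):
    # Map expert features into the generalist embedding space with a
    # small projector (hypothetical `project` callable), then append
    # them to the generalist visual tokens, so the backbone sees both
    # streams without being retrained.
    projected = [project(t) for t in expert_tokens]
    return general_tokens + projected


# Toy usage with scalar stand-ins for feature vectors and a dummy projector:
merged = inject_expert_features([1, 2], [3, 4], lambda t: t * 10)
# merged -> [1, 2, 30, 40]
```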

📝 Abstract
Recent advancements in Large Multi-modal Models (LMMs) underscore the importance of scaling by increasing image-text paired data, achieving impressive performance on general tasks. Despite their effectiveness in broad applications, generalist models are primarily trained on web-scale datasets dominated by natural images, resulting in the sacrifice of specialized capabilities for domain-specific tasks that require extensive domain prior knowledge. Moreover, directly integrating expert models tailored for specific domains is challenging due to the representational gap and imbalanced optimization between the generalist model and experts. To address these challenges, we introduce Chimera, a scalable and low-cost multi-modal pipeline designed to boost the ability of existing LMMs with domain-specific experts. Specifically, we design a progressive training strategy to integrate features from expert models into the input of a generalist LMM. To address the imbalanced optimization caused by the well-aligned general visual encoder, we introduce a novel Generalist-Specialist Collaboration Masking (GSCM) mechanism. This results in a versatile model that excels across the chart, table, math, and document domains, achieving state-of-the-art performance on multi-modal reasoning and visual content extraction tasks, both of which are challenging tasks for assessing existing LMMs.
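The GSCM mechanism in the abstract can be caricatured as a per-token mask that decides which branch supplies each feature during training. The sketch below is a hypothetical toy (the names `gscm_merge` and `expert_prob` are not from the paper): by routing some positions away from the already well-aligned general encoder, the expert branch receives its own share of the optimization signal, which is the imbalance the abstract says GSCM addresses.

```python
import random


def gscm_merge(general_feats, expert_feats, expert_prob=0.7, seed=None):
    # Per-token coin flip picks which branch contributes the feature.
    # Biasing the mask toward the expert branch counters the imbalanced
    # optimization caused by the well-aligned general visual encoder.
    rng = random.Random(seed)
    merged = []
    for g, e in zip(general_feats, expert_feats):
        merged.append(e if rng.random() < expert_prob else g)
    return merged
```

At `expert_prob=1.0` every token comes from the expert branch; at `0.0` the generalist features pass through unchanged, so the mask rate interpolates between the two encoders.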
Problem

Research questions and friction points this paper is trying to address:

- Multimodal Models
- Domain-specific Knowledge
- Multimodal Reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out:

- Chimera System
- Multi-modal Model Enhancement
- Expertise Integration