AI Summary
This work addresses the challenge of unreliable visual recognition in scenarios characterized by high inter-class similarity, significant scale variation, and limited computational resources. To this end, we propose the DS-MoE framework, which integrates a distilled large language model with a sparse Mixture-of-Experts (MoE) architecture. Our approach employs a text-guided dynamic routing mechanism to enable semantics-driven expert activation and incorporates a lightweight MobileSAM encoder for multi-scale defect awareness, thereby achieving precise alignment between text-visual semantics and defect patterns. Evaluated on the BBMP, aluminum foil, and PCB datasets, DS-MoE outperforms YOLOv8 and YOLOX by 13.9, 1.4, and 2.0 percentage points in mAP@0.5:0.95, respectively, while also delivering notable improvements in both precision and recall.
Abstract
High inter-class similarity, extreme scale variation, and limited computational budgets hinder reliable visual recognition across diverse real-world data. Existing vision-centric and cross-modal approaches often rely on rigid fusion mechanisms and heavy annotation pipelines, leading to sub-optimal generalization. We propose the Distilled Large Language Model (LLM)-Driven Sparse Mixture-of-Experts (DS-MoE) framework, which integrates text-guided dynamic routing and lightweight multi-scale comprehension. The DS-MoE framework dynamically aligns textual semantics with defect-specific visual patterns through a sparse MoE architecture, where task-relevant experts are adaptively activated based on semantic relevance, resolving inter-class ambiguity. A lightweight MobileSAM encoder enables real-time inference while preserving multi-scale defect details. Extensive experiments on PCB, aluminum foil, and mold defect datasets demonstrate that our framework achieves superior performance compared to existing pure vision models. DS-MoE surpasses YOLOv8/YOLOX with gains of +13.9, +1.4, and +2.0 pp mAP@0.5:0.95 on BBMP, aluminum, and PCB, respectively, while also improving precision and recall.
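The core idea of text-guided sparse routing (scoring experts by semantic relevance to a text embedding and activating only the top-k for a given visual feature) can be sketched as follows. This is a minimal illustration under assumed shapes and names (`expert_keys`, `text_guided_moe`, linear toy experts); it is not the authors' implementation.

```python
import numpy as np

def text_guided_moe(visual_feat, text_emb, expert_keys, experts, top_k=2):
    """Sketch of text-guided sparse MoE routing.

    Routing scores come from the similarity between a text embedding and
    per-expert key vectors; only the top-k experts process the visual
    feature, and their outputs are combined with softmax-normalized weights.
    """
    scores = expert_keys @ text_emb            # (num_experts,) semantic relevance
    top = np.argsort(scores)[-top_k:]          # indices of the top-k experts
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                               # softmax over the selected experts only
    return sum(wi * experts[i](visual_feat) for wi, i in zip(w, top))

# Toy setup: linear maps stand in for expert subnetworks.
rng = np.random.default_rng(0)
d = 8
experts = [lambda x, W=rng.standard_normal((d, d)): W @ x for _ in range(4)]
expert_keys = rng.standard_normal((4, d))
out = text_guided_moe(rng.standard_normal(d), rng.standard_normal(d),
                      expert_keys, experts, top_k=2)
```

Because only `top_k` of the experts run per input, the compute cost stays roughly constant as more experts are added, which is what makes the sparse design compatible with a limited inference budget.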