UniPCB: A Unified Vision-Language Benchmark for Open-Ended PCB Quality Inspection

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing vision-language models exhibit limited performance on complex printed circuit board (PCB) quality inspection tasks and lack a unified, high-quality evaluation benchmark. To address this gap, this work introduces UniPCB, the first multimodal vision-language benchmark tailored for open-ended PCB inspection, and proposes PCB-GPT—a model trained via a progressive curriculum learning strategy that mimics the learning process of human experts. By integrating multimodal large language models, vision-language alignment, and cross-source data standardization, the proposed approach significantly outperforms the strongest existing baselines in fine-grained defect localization and analysis, achieving more than a twofold improvement in localization accuracy.

📝 Abstract
Multimodal Large Language Models (MLLMs) show promise for general industrial quality inspection, but fall short in complex scenarios, such as Printed Circuit Board (PCB) inspection. PCB inspection poses unique challenges due to densely packed components, complex wiring structures, and subtle defect patterns that require specialized domain expertise. However, a high-quality, unified vision-language benchmark for quantitatively evaluating MLLMs across PCB inspection tasks remains absent, stemming not only from limited data availability but also from fragmented datasets and inconsistent standardization. To fill this gap, we propose UniPCB, the first unified vision-language benchmark for open-ended PCB quality inspection. UniPCB is built via a systematic pipeline that curates and standardizes data from disparate sources across three annotated scenarios. Furthermore, we introduce PCB-GPT, an MLLM trained on a new instruction dataset generated by this pipeline, utilizing a novel progressive curriculum that mimics the learning process of human experts. Evaluations on the UniPCB benchmark show that while existing MLLMs falter on domain-specific tasks, PCB-GPT establishes a new baseline. Notably, it more than doubles the performance on fine-grained defect localization compared to the strongest competitors, with significant advantages in localization and analysis. We will release the instruction data, benchmark, and model to facilitate future research.
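The abstract describes a progressive curriculum that mimics how human experts learn, moving from general understanding to fine-grained defect analysis. The paper does not spell out its stages here, so the following is only a minimal illustrative sketch of staged, easy-to-hard data scheduling; the stage names and difficulty scores are assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of a progressive curriculum schedule for instruction
# data: training pools grow cumulatively from easy tasks to hard ones.
# Stage names and difficulty levels are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class InstructionSample:
    prompt: str
    difficulty: int  # assumed score: 0=description, 1=detection, 2=localization


def curriculum_stages(samples, thresholds=(0, 1, 2)):
    """Yield one training pool per stage; each stage adds harder samples."""
    for t in thresholds:
        yield [s for s in samples if s.difficulty <= t]


data = [
    InstructionSample("Describe the PCB in this image.", 0),
    InstructionSample("Does this board contain a defect?", 1),
    InstructionSample("Localize the missing-component defect.", 2),
]

pools = list(curriculum_stages(data))
# pool sizes grow across stages: 1, then 2, then 3 samples
```

The key design choice in any such schedule is cumulative inclusion: later stages retain earlier, easier samples so the model does not forget foundational skills while learning fine-grained ones.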
Problem

Research questions and friction points this paper is trying to address.

PCB inspection
vision-language benchmark
multimodal large language models
quality inspection
defect localization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified Vision-Language Benchmark
Progressive Curriculum Learning
Multimodal Large Language Model
PCB Quality Inspection
Fine-grained Defect Localization
Authors

Fuxiang Sun
Shenzhen Polytechnic University

Xi Jiang
Southern University of Science and Technology
Computer Vision, Deep Learning

Jiansheng Wu
University of Science and Technology Liaoning

Haigang Zhang
Shenzhen Polytechnic University

Feng Zheng
Southern University of Science and Technology; Spatialtemporal AI
Embodied Intelligence, Spatialtemporal AI, Computer Vision

Jinfeng Yang
Shenzhen Polytechnic University