Differences That Matter: Auditing Models for Capability Gap Discovery and Rectification

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Conventional evaluation of multimodal large language models (MLLMs) lacks interpretability and fails to uncover fine-grained capability disparities across models. Method: The paper proposes AuditDM, a model-divergence-driven active auditing framework. It fine-tunes an MLLM as an "auditor" via reinforcement learning to autonomously generate challenging questions and counterfactual images, enabling unsupervised failure localization and repair-data synthesis without human annotations. The method integrates multimodal adversarial generation, unsupervised failure mining, and counterfactual reasoning. Contribution/Results: AuditDM discovers over 20 distinct failure patterns in state-of-the-art models, including Gemma-3 and PaliGemma-2. Audit-driven fine-tuning yields consistent improvements across 16 benchmarks and enables a 3B-parameter model to outperform a 28B-parameter baseline, suggesting that lightweight models can achieve substantial capability gains through principled auditing and targeted optimization.

📝 Abstract
Conventional evaluation methods for multimodal LLMs (MLLMs) lack interpretability and are often insufficient to fully disclose significant capability gaps across models. To address this, we introduce AuditDM, an automated framework that actively discovers and rectifies MLLM failure modes by auditing their divergence. AuditDM fine-tunes an MLLM as an auditor via reinforcement learning to generate challenging questions and counterfactual images that maximize disagreement among target models. Once trained, the auditor uncovers diverse, interpretable exemplars that reveal model weaknesses and serve as annotation-free data for rectification. When applied to SoTA models like Gemma-3 and PaliGemma-2, AuditDM discovers more than 20 distinct failure types. Fine-tuning on these discoveries consistently improves all models across 16 benchmarks, and enables a 3B model to surpass its 28B counterpart. Our results suggest that as data scaling hits diminishing returns, targeted model auditing offers an effective path to model diagnosis and improvement.
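The abstract's core mechanism is an auditor rewarded for producing inputs that maximize disagreement among the target models. The paper does not specify the reward function; as a minimal sketch, one plausible signal is the fraction of model pairs whose answers to an auditor-generated question differ (the function name and string-equality comparison here are illustrative assumptions, not the paper's implementation):

```python
from itertools import combinations

def disagreement_reward(answers):
    """Fraction of target-model pairs whose answers differ.

    `answers` holds one answer string per target model.
    Returns 0.0 when all models agree (no audit signal) and
    1.0 when every pair disagrees (maximally revealing input).
    """
    pairs = list(combinations(answers, 2))
    if not pairs:
        return 0.0
    return sum(a != b for a, b in pairs) / len(pairs)

# Toy usage: three target models answer the same auditor-generated question.
print(disagreement_reward(["cat", "cat", "cat"]))  # 0.0 — models agree
print(disagreement_reward(["cat", "dog", "cat"]))  # ~0.667 — one dissenter
```

In an RL loop, this scalar would serve as the reward for the auditor's generated question-image pair; inputs that score high are exactly the interpretable exemplars the abstract describes as annotation-free rectification data.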
Problem

Research questions and friction points this paper is trying to address.

Auditing multimodal LLMs to discover interpretable capability gaps
Automatically generating challenging questions and images revealing model weaknesses
Using discovered failures as annotation-free data for model rectification
Innovation

Methods, ideas, or system contributions that make the work stand out.

An automated framework that audits divergence among multimodal LLMs
Reinforcement learning fine-tunes an auditor to generate challenging questions
Counterfactual image generation reveals model weaknesses