APT: Improving Specialist LLM Performance with Weakness Case Acquisition and Iterative Preference Training

📅 2025-06-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the degradation of general-purpose capabilities in large language models (LLMs) caused by domain-specific fine-tuning, this paper proposes an error-sample-driven framework for weakness identification and iterative preference training. Rather than requiring a full domain dataset, the method self-generates erroneous examples, retrieves semantically similar samples, and combines weakly supervised error detection, counterfactual weakness construction, and lightweight preference-alignment training for targeted optimization. Its core contribution is the paradigm of "weakness-case acquisition + iterative preference training," in which errors serve as explicit signals that guide efficient, capability-preserving adaptation. Experiments on LLaMA-2 and Mistral-7B show no degradation in general-domain performance alongside statistically significant improvements over mainstream methods (including LoRA, QLoRA, and DPO) on downstream domain-specific tasks.

📝 Abstract
Large Language Models (LLMs) often require domain-specific fine-tuning to address targeted tasks, which risks degrading their general capabilities. Maintaining a balance between domain-specific enhancements and general model utility is a key challenge. This paper proposes a novel approach named APT (Weakness Case Acquisition and Iterative Preference Training) to enhance domain-specific performance with self-generated dis-preferred weakness data (bad cases and similar cases). APT uniquely focuses on training the model using only those samples where errors occur, alongside a small, similar set of samples retrieved for this purpose. This targeted training minimizes interference with the model's existing knowledge base, effectively retaining generic capabilities. Experimental results on the LLaMA-2 and Mistral-V0.3 models across various benchmarks demonstrate that APT ensures no reduction in generic capacity and achieves superior performance on downstream tasks compared to various existing methods. This validates our method as an effective strategy for enhancing domain-specific capabilities without sacrificing the model's broader applicability.
Problem

Research questions and friction points this paper is trying to address.

Balancing domain-specific enhancements with general model utility
Improving specialist LLM performance using weakness case acquisition
Retaining generic capabilities while enhancing domain-specific tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses self-generated dis-preferred weakness data
Targets training on error-prone samples only
Retains generic capabilities via minimal interference
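The data-acquisition steps described above (detect samples where the model errs, retrieve similar samples, and pair gold answers with the model's wrong outputs as dis-preferred data) can be illustrated with a minimal sketch. All helper names here are hypothetical, and the bag-of-words similarity is a stand-in for the paper's actual embedding-based retrieval and DPO-style preference training.

```python
# Minimal sketch of an APT-style weakness-data pipeline.
# Helper names and the similarity function are illustrative, not the paper's code.
import math
from collections import Counter

def collect_bad_cases(examples, model_fn):
    """Keep only samples where the model's answer disagrees with the gold answer."""
    return [ex for ex in examples if model_fn(ex["prompt"]) != ex["gold"]]

def _bow_cosine(a, b):
    """Bag-of-words cosine similarity (stand-in for a real embedding model)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar(bad_case, pool, k=2):
    """Retrieve the k pool samples most similar to a bad case's prompt."""
    ranked = sorted(pool,
                    key=lambda ex: _bow_cosine(bad_case["prompt"], ex["prompt"]),
                    reverse=True)
    return ranked[:k]

def build_preference_pairs(bad_cases, model_fn):
    """Pair the gold answer (preferred) with the model's wrong answer (dis-preferred)."""
    return [{"prompt": ex["prompt"],
             "chosen": ex["gold"],
             "rejected": model_fn(ex["prompt"])}
            for ex in bad_cases]
```

The resulting (prompt, chosen, rejected) triples are exactly the format consumed by DPO-style preference-training loops, which is what makes the error signal directly usable for targeted alignment.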
Jun Rao
Harbin Institute of Technology (Shenzhen)
LLMs, Efficient Post-training, Knowledge Distillation, Multimodal
Zepeng Lin
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
Xuebo Liu
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
Xiaopeng Ke
Nanjing University
deep learning, adversarial learning, metric learning, trustworthy AI
Lian Lian
Huawei Cloud Computing Technologies Co., Ltd.
Dong Jin
Huawei Cloud Computing Technologies Co., Ltd.
Shengjun Cheng
Huawei Cloud Computing Technologies Co., Ltd.
Jun Yu
School of Intelligence Science and Engineering, Harbin Institute of Technology, Shenzhen, China
Min Zhang
Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China