Self-Correction Distillation for Structured Data Question Answering

πŸ“… 2025-11-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Small language models (e.g., 8B-parameter models) often generate erroneous structured queries for table QA, knowledge graph QA, and temporal knowledge graph QA. To address this, the paper proposes a self-correction distillation framework with two components: (1) an error prompt mechanism (EPM) that performs fine-grained error detection and returns customized feedback during inference; and (2) a two-stage knowledge distillation paradigm that transfers both query-generation and error-correction capabilities from large models (e.g., GPT-4) to small ones, which the authors present as the first such effort in the literature. Experiments across five benchmarks spanning three structured data modalities show that the approach significantly outperforms existing distillation methods: small models approach GPT-4's performance, while the large model augmented with EPM attains state-of-the-art results on most tasks.

πŸ“ Abstract
Structured data question answering (QA), including table QA, knowledge graph (KG) QA, and temporal KG QA, is a pivotal research area. Advances in large language models (LLMs) have driven significant progress in unified structured QA frameworks such as TrustUQA. However, these frameworks face challenges when applied to small-scale LLMs, since small-scale LLMs are prone to errors in generating structured queries. To improve the structured data QA ability of small-scale LLMs, we propose a self-correction distillation (SCD) method. In SCD, an error prompt mechanism (EPM) is designed to detect errors and provide customized error messages during inference, and a two-stage distillation strategy is designed to transfer large-scale LLMs' query-generation and error-correction capabilities to small-scale LLMs. Experiments across 5 benchmarks with 3 structured data types demonstrate that our SCD achieves the best performance and superior generalization on a small-scale LLM (8B) compared with other distillation methods, and closely approaches the performance of GPT-4 on some datasets. Furthermore, large-scale LLMs equipped with EPM surpass state-of-the-art results on most datasets.
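
The abstract's inference-time loop is concrete enough to sketch. Below is a minimal, hypothetical Python rendering of an EPM-style self-correction cycle; the callables `generate`, `execute`, and `describe_error`, the prompt format, and the retry budget are all illustrative assumptions, not the paper's actual interface.

```python
from typing import Callable, Optional, Tuple

def self_correcting_qa(
    generate: Callable[[str], str],  # LLM call: prompt text -> generated query
    execute: Callable[[str], Tuple[Optional[str], Optional[str]]],  # query -> (answer, error)
    describe_error: Callable[[str, str], str],  # (query, raw error) -> customized message
    question: str,
    schema: str,
    max_rounds: int = 3,
) -> Optional[str]:
    """Generate a structured query; on execution failure, feed an EPM-style
    customized error message back to the model and ask for a correction."""
    prompt = f"Schema:\n{schema}\n\nQuestion: {question}\nQuery:"
    query = generate(prompt)
    for _ in range(max_rounds):
        answer, error = execute(query)
        if error is None:
            return answer  # query executed cleanly; return its result
        # EPM role: turn the raw failure into fine-grained, query-specific
        # feedback (e.g., "relation 'bornIn' not in KG; did you mean 'birthPlace'?").
        hint = describe_error(query, error)
        prompt += f"\n\nPrevious query: {query}\nError feedback: {hint}\nCorrected query:"
        query = generate(prompt)
    return None  # all correction rounds failed
```

Keeping the failed query and its customized error message in the growing prompt is what lets the next generation be conditioned on exactly what went wrong.
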
Problem

Research questions and friction points this paper is trying to address.

Improving structured query generation accuracy for small-scale language models
Distilling error-correction capabilities from large to small language models
Enhancing structured data question answering performance across multiple benchmarks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Self-correction distillation (SCD) method for small-scale LLMs
Error prompt mechanism (EPM) detects errors and returns customized error messages during inference
Two-stage distillation transfers query-generation and error-correction skills (see the sketch after this list)
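
One plausible way to realize the two-stage transfer, under the same hypothetical interface as the sketch above: stage 1 collects (prompt, query) pairs that teach query generation, and stage 2 collects correction examples in which the teacher repairs a failed query given its error message. The paper's actual data pipeline may differ.

```python
def build_distillation_sets(teacher_generate, execute, tasks):
    """Split teacher traces into stage-1 (query generation) and
    stage-2 (error correction) fine-tuning data for the student."""
    stage1, stage2 = [], []
    for question, schema in tasks:
        prompt = f"Schema:\n{schema}\n\nQuestion: {question}\nQuery:"
        query = teacher_generate(prompt)
        _, error = execute(query)
        if error is None:
            stage1.append({"input": prompt, "target": query})  # teach generation
            continue
        # Teacher corrects its own failed query given the error message.
        fix_prompt = (prompt + f"\n\nPrevious query: {query}\n"
                      f"Error: {error}\nCorrected query:")
        fixed = teacher_generate(fix_prompt)
        if execute(fixed)[1] is None:  # keep only corrections that actually run
            stage2.append({"input": fix_prompt, "target": fixed})  # teach correction
    return stage1, stage2
```

Fine-tuning on stage 1 then stage 2 would give the student both skills the paper names: generating queries and repairing them from EPM feedback.
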
Yushan Zhu
Zhejiang University, China
Wen Zhang
Zhejiang University, China
Long Jin
Zhejiang University, China
Mengshu Sun
Beijing University of Technology
Deep Learning, Model Compression and Acceleration
Ling Zhong
Ant Group, China
Zhiqiang Liu
Zhejiang University, China
Juan Li
Zhejiang University, China
Lei Liang
Ant Group
Knowledge Graph, AI
Chong Long
JIUTIAN Research, Beijing, China
Chao Deng
JIUTIAN Research, Beijing, China
Junlan Feng
Chief Scientist at China Mobile Research
Natural Language, Machine Learning, Speech Processing, Data Mining