Decomposition-Enhanced Training for Post-Hoc Attributions In Language Models

📅 2025-10-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing post-hoc attribution methods for long-document question answering produce unreliable attributions in multi-hop, abstractive, and semi-extractive scenarios. To address this, the authors reformulate attribution as a decomposable reasoning task. They propose DecompTune, a training paradigm that leverages strong LLMs to automatically generate answer-attribution pairs augmented with intermediate reasoning chains. Building on this data, they design a two-stage post-training framework, supervised fine-tuning (SFT) followed by Group Relative Policy Optimization (GRPO), to jointly optimize attribution fidelity and reasoning quality. Implemented on Qwen-2.5, the approach generates fine-grained, interpretable attributions and significantly improves attribution accuracy and faithfulness in complex QA settings, outperforming prior attribution methods across multiple benchmarks and matching or exceeding state-of-the-art LLMs.

📝 Abstract
Large language models (LLMs) are increasingly used for long-document question answering, where reliable attribution to sources is critical for trust. Existing post-hoc attribution methods work well for extractive QA but struggle in multi-hop, abstractive, and semi-extractive settings, where answers synthesize information across passages. To address these challenges, we argue that post-hoc attribution can be reframed as a reasoning problem, where answers are decomposed into constituent units, each tied to specific context. We first show that prompting models to generate such decompositions alongside attributions improves performance. Building on this, we introduce DecompTune, a post-training method that teaches models to produce answer decompositions as intermediate reasoning steps. We curate a diverse dataset of complex QA tasks, annotated with decompositions by a strong LLM, and post-train Qwen-2.5 (7B and 14B) using a two-stage SFT + GRPO pipeline with task-specific curated rewards. Across extensive experiments and ablations, DecompTune substantially improves attribution quality, outperforming prior methods and matching or exceeding state-of-the-art frontier models.
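The core reframing, decomposing an answer into constituent units and tying each unit to a specific context passage, can be illustrated with a minimal sketch. In the paper an LLM performs both steps; here a simple word-overlap scorer stands in for the attribution model, and the sentence-level split is an illustrative assumption rather than the paper's decomposition format.

```python
# Toy sketch of decomposition-based post-hoc attribution.
# NOTE: the overlap scorer and sentence split are stand-ins for the
# LLM-generated decompositions and attributions described in the paper.

def decompose(answer: str) -> list[str]:
    """Split an answer into constituent units (here: sentences)."""
    return [s.strip() for s in answer.split(".") if s.strip()]

def attribute(unit: str, passages: list[str]) -> int:
    """Return the index of the passage with the highest word overlap."""
    unit_words = set(unit.lower().split())
    scores = [len(unit_words & set(p.lower().split())) for p in passages]
    return max(range(len(passages)), key=scores.__getitem__)

def attribute_answer(answer: str, passages: list[str]) -> list[tuple[str, int]]:
    """Attribute each decomposed unit of the answer to one passage."""
    return [(unit, attribute(unit, passages)) for unit in decompose(answer)]

passages = [
    "The Nile flows through Egypt and into the Mediterranean.",
    "Mount Fuji is the highest mountain in Japan.",
]
answer = "The Nile is in Egypt. Fuji is located in Japan."
print(attribute_answer(answer, passages))
# each unit is paired with the index of its supporting passage
```

In the multi-hop setting this per-unit structure is what makes attribution tractable: each synthesized claim is grounded separately instead of attributing the whole answer at once.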
Problem

Research questions and friction points this paper is trying to address.

Improving attribution reliability in long-document question answering
Addressing multi-hop, abstractive, and semi-extractive QA, where answers synthesize information across passages
Teaching models to produce answer decompositions as intermediate steps toward more faithful attributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reframes attribution as reasoning with answer decompositions
Introduces DecompTune post-training with decomposition generation
Uses a two-stage SFT + GRPO pipeline with task-specific curated rewards