🤖 AI Summary
Large language models (LLMs) in the legal domain frequently generate unfaithful, unsupported hallucinations, severely limiting their practical deployment. To address insufficient context integration in retrieval-augmented generation (RAG) and the absence of explicit faithfulness constraints in existing decoding strategies, this paper proposes Confidence-guided Copy-based Decoding for Legal Text Generation (CoCoLex). CoCoLex dynamically interpolates the model's vocabulary distribution with a context-derived copy distribution, adaptively modulating the copying strength based on the model's output confidence. This preserves semantic coherence while substantially improving the evidential grounding and traceability of generated outputs. Evaluated across five legal text generation benchmarks, CoCoLex consistently outperforms state-of-the-art context-aware decoding methods, demonstrating superior factual consistency and contextual faithfulness, particularly in long-form generation scenarios.
📝 Abstract
Due to their ability to process long and complex contexts, LLMs can offer key benefits to the legal domain, but their adoption has been hindered by their tendency to generate unfaithful, ungrounded, or hallucinatory outputs. While retrieval-augmented generation offers a promising solution by grounding generations in external knowledge, it provides no guarantee that the retrieved context will be effectively integrated. To address this, context-aware decoding strategies have been proposed to amplify the influence of relevant context, but they usually do not explicitly enforce faithfulness to that context. In this work, we introduce Confidence-guided Copy-based Decoding for Legal Text Generation (CoCoLex), a decoding strategy that dynamically interpolates the model-produced vocabulary distribution with a distribution derived by copying from the context. CoCoLex encourages direct copying based on the model's confidence, ensuring greater fidelity to the source. Experimental results on five legal benchmarks demonstrate that CoCoLex outperforms existing context-aware decoding methods, particularly in long-form generation tasks.
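A minimal sketch of the interpolation idea described above: blend the model's next-token distribution with a copy distribution over the context, with the blend weight driven by the model's confidence. The function name `cocolex_step`, the hyperparameter `lam_max`, and the specific confidence-to-weight mapping (more copying when the model is less confident) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cocolex_step(p_vocab, p_copy, lam_max=0.5):
    """One decoding step of confidence-guided copy interpolation (sketch).

    p_vocab : model's next-token probability distribution over the vocabulary
    p_copy  : distribution over the same vocabulary obtained by copying from
              the retrieved context (e.g. attention weights over context
              tokens scattered back onto their vocabulary ids)
    lam_max : maximum copy weight (hypothetical hyperparameter)
    """
    # Use the model's confidence (its top probability) to set the copy
    # weight: low confidence -> lean more heavily on copying from context.
    # (The exact mapping here is an assumption for illustration.)
    confidence = p_vocab.max()
    lam = lam_max * (1.0 - confidence)

    # Linear interpolation of the two distributions, then renormalize.
    p_final = (1.0 - lam) * p_vocab + lam * p_copy
    return p_final / p_final.sum()

# Toy example: a 3-token vocabulary where the context favors token 1.
p_vocab = np.array([0.7, 0.2, 0.1])
p_copy = np.array([0.1, 0.8, 0.1])
p_out = cocolex_step(p_vocab, p_copy)
```

The next token would then be sampled (or argmax-decoded) from `p_out` as in ordinary decoding; because `p_out` shifts mass toward context tokens exactly when the model is unsure, generations stay closer to the retrieved evidence.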