LayerCake: Token-Aware Contrastive Decoding within Large Language Model Layers

📅 2025-07-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently generate factual errors on knowledge-intensive tasks, undermining their reliability. To address this, we propose LayerCake, a training-free, inference-time method that jointly models token types (e.g., punctuation and conceptual tokens) and layer-wise attention dynamics in Transformers. By analyzing attention distributions, LayerCake identifies the characteristic layer patterns of key token types and builds a token-aware, layer-localized contrastive mechanism. It further introduces controlled degeneration, generating contrastive outputs under constrained attention perturbations, to produce discriminative signals that selectively suppress misleading attention paths during decoding. Crucially, LayerCake requires no architectural modifications or parameter updates. Evaluated on FACTKG, FEVER, and other factuality benchmarks, it consistently improves factual accuracy across diverse LLMs, including LLaMA-2, Qwen, and Mixtral, while maintaining high computational efficiency, broad model compatibility, and plug-and-play deployment.

📝 Abstract
Large language models (LLMs) excel at natural language understanding and generation but remain vulnerable to factual errors, limiting their reliability in knowledge-intensive tasks. While decoding-time strategies provide a promising efficient solution without training, existing methods typically treat token-level and layer-level signals in isolation, overlooking the joint dynamics between them. In this work, we introduce a token-aware, layer-localized contrastive decoding method that aligns specific token types with their most influential transformer layers to improve factual generation. Through empirical attention analysis, we identify two key patterns: punctuation tokens receive dominant attention in early layers, while conceptual tokens govern semantic reasoning in intermediate layers. By selectively suppressing attention to these token types at their respective depths, we induce controlled factual degradation and derive contrastive signals that guide the final factual decoding. Our method requires no additional training or model modification, and experiments demonstrate that it consistently improves factuality across multiple LLMs and benchmarks.
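The final decoding step described above follows the standard contrastive-decoding pattern: score each candidate token by how much the base model prefers it over the deliberately degraded pass. A minimal sketch, assuming log-probability contrast with an illustrative weight `alpha` and a plausibility cutoff `tau` (both hypothetical hyperparameter names, not taken from the paper):

```python
import numpy as np

def contrastive_decode(logits_base, logits_degraded, alpha=1.0, tau=0.1):
    """Pick the next token by contrasting the base distribution against a
    degraded one (sketch; `alpha` and `tau` are illustrative, not from the
    paper). Tokens whose base probability falls below `tau * max_prob` are
    masked out so the contrast cannot promote implausible tokens.
    """
    # log-softmax both passes
    log_p_base = logits_base - np.log(np.sum(np.exp(logits_base)))
    log_p_deg = logits_degraded - np.log(np.sum(np.exp(logits_degraded)))
    # plausibility constraint relative to the base model's top token
    p_base = np.exp(log_p_base)
    plausible = p_base >= tau * p_base.max()
    # reward tokens the base model prefers but the degraded pass does not
    score = log_p_base - alpha * log_p_deg
    score[~plausible] = -np.inf
    return int(np.argmax(score))
```

The plausibility mask matters: a token that is near-impossible under the base model can still get a large contrast score, and the cutoff prevents it from being selected.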
Problem

Research questions and friction points this paper is trying to address.

LLMs vulnerable to factual errors in knowledge tasks
Existing methods ignore token-layer joint dynamics
Need token-aware layer-localized decoding for factual generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-aware layer-localized contrastive decoding method
Aligns token types with influential transformer layers
Selectively suppresses attention to specific token types
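The suppression step above can be sketched for a single attention head: zero the columns belonging to the targeted token type (punctuation in early layers, conceptual tokens in intermediate layers) and renormalize each row. This is a hypothetical illustration of the mechanism, not the paper's implementation:

```python
import numpy as np

def suppress_attention(attn, token_ids, target_ids):
    """Zero out attention paid to a chosen token set and renormalize rows
    (sketch of the suppression step; in the paper this is applied to
    punctuation tokens in early layers and conceptual tokens in
    intermediate layers).

    attn       : (seq, seq) row-stochastic attention matrix for one head
    token_ids  : (seq,) token id at each position
    target_ids : ids of the token type being suppressed
    """
    mask = np.isin(token_ids, list(target_ids))
    out = attn.copy()
    out[:, mask] = 0.0
    row_sums = out.sum(axis=1, keepdims=True)
    # guard rows that attended only to suppressed tokens
    row_sums[row_sums == 0.0] = 1.0
    return out / row_sums
```

Running the model with these perturbed attention maps yields the degraded pass whose logits are then contrasted against the unperturbed pass at decoding time.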