CLAA: Cross-Layer Attention Aggregation for Accelerating LLM Prefill

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the computational bottleneck in the prefill stage of long-context large language model inference, where existing token importance–based acceleration methods suffer from unstable importance estimation across layers. We propose an Answer-Informed Oracle that provides reliable per-layer ground-truth token importance, revealing that current heuristic strategies fluctuate significantly in quality from layer to layer. Building on this insight, we design a cross-layer attention aggregation mechanism that integrates multi-layer information to optimize selective retention in the KV cache. Experiments demonstrate that our approach reduces Time-to-First-Token (TTFT) by up to 39%, closely approaching the theoretical upper bound established by the oracle.
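The aggregation idea can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the per-layer score matrix, the mean as the aggregation rule, and the `keep_ratio` parameter are all assumptions for illustration. The point is that averaging importance scores across layers smooths out a single unstable layer before selecting which prompt tokens to retain.

```python
import numpy as np

def cross_layer_aggregate(attn_scores: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Aggregate per-layer token importance and pick tokens to retain.

    attn_scores: shape (num_layers, num_prompt_tokens), one importance
    score per token per layer (e.g., accumulated attention mass).
    Returns the indices of the retained tokens, in original order.
    """
    # Average across layers so one noisy layer cannot dominate the ranking.
    aggregated = attn_scores.mean(axis=0)
    k = max(1, int(len(aggregated) * keep_ratio))
    # Top-k by aggregated score, then restore positional order.
    return np.sort(np.argsort(aggregated)[-k:])

# Toy example: 3 layers, 6 prompt tokens; layer 1 ranks tokens erratically.
scores = np.array([
    [0.9, 0.1, 0.8, 0.1, 0.7, 0.1],
    [0.1, 0.9, 0.1, 0.1, 0.1, 0.1],  # unstable layer
    [0.8, 0.1, 0.9, 0.1, 0.6, 0.1],
])
print(cross_layer_aggregate(scores, keep_ratio=0.5))  # → [0 2 4]
```

Relying on the unstable middle layer alone would have retained token 1; aggregation recovers the tokens that matter in most layers.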

📝 Abstract
The prefill stage in long-context LLM inference remains a computational bottleneck. Recent token-ranking heuristics accelerate inference by selectively processing a subset of semantically relevant tokens. However, existing methods suffer from unstable token importance estimation, often varying between layers. Evaluating token-ranking quality independently from heuristic-specific architectures is challenging. To address this, we introduce an Answer-Informed Oracle, which defines ground-truth token importance by measuring attention from generated answers back to the prompt. This oracle reveals that existing heuristics exhibit high variance across layers: rankings can degrade sharply at specific layers, a failure mode invisible to end-to-end benchmarks. The diagnosis suggests a simple fix: aggregate scores across layers rather than relying on any single one. We implement this as Cross-Layer Attention Aggregation (CLAA), which closes the gap to the oracle upper bound and reduces Time-to-First-Token (TTFT) by up to 39% compared to the Full KV Cache baseline.
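The oracle's scoring rule, as described in the abstract, can be sketched in a few lines. This is a hedged illustration, not the paper's code: the attention-matrix shape and the use of a simple sum over answer rows are assumptions. Ground-truth importance of a prompt token is taken here as the total attention it receives from the generated answer tokens.

```python
import numpy as np

def answer_informed_oracle(attn: np.ndarray, prompt_len: int) -> np.ndarray:
    """Score prompt tokens by attention received from answer tokens.

    attn: shape (num_answer_tokens, seq_len), attention weights from each
    generated answer token over the full sequence (prompt + prior answer).
    Returns one importance score per prompt token.
    """
    # Restrict to the prompt columns, then total the attention each
    # prompt token receives across all generated answer tokens.
    return attn[:, :prompt_len].sum(axis=0)

# Toy example: 2 answer tokens attending over 4 prompt tokens + 1 answer token.
attn = np.array([
    [0.5, 0.1, 0.3, 0.1, 0.0],
    [0.2, 0.1, 0.5, 0.1, 0.1],
])
importance = answer_informed_oracle(attn, prompt_len=4)
print(int(np.argmax(importance)))  # most important prompt token
```

A heuristic's per-layer ranking can then be compared against this oracle ranking (e.g., by rank correlation) to expose the layers where it degrades.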
Problem

Research questions and friction points this paper is trying to address.

prefill
token importance estimation
long-context LLM inference
attention aggregation
computational bottleneck
Innovation

Methods, ideas, or system contributions that make the work stand out.

Cross-Layer Attention Aggregation
Answer-Informed Oracle
Token Importance Estimation
LLM Prefill Acceleration
Long-Context Inference