You Had One Job: Per-Task Quantization Using LLMs' Hidden Representations

📅 2025-11-09
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) incur excessive memory footprints and inference latency on lightweight tasks due to structural redundancy. Method: This paper proposes Task-Aware Quantization (TAQ), a post-training quantization framework that departs from conventional task-agnostic approaches by constructing layer-wise sensitivity profiles from task-relevant hidden representations. TAQ adaptively assigns a bitwidth to each layer based on activation statistics over a small calibration set; TAQO further refines this with lightweight layer sensitivity tests that directly estimate accuracy impact. Neither method requires fine-tuning. Contribution/Results: Evaluated on Phi-4, Llama-3.1, and Qwen3, TAQ outperforms baselines including AWQ, achieving 42.33 EM / 50.81 F1 on Phi-4 with <1% accuracy degradation at a lower average bitwidth, thus balancing efficiency and task-specific adaptation.

πŸ“ Abstract
Large Language Models (LLMs) excel across diverse tasks, yet many applications require only limited capabilities, making large variants inefficient in memory and latency. Existing approaches often combine distillation and quantization, but most post-training quantization (PTQ) methods are task-agnostic, ignoring how task-specific signals are distributed across layers. In this work, we propose to use hidden representations that encode task-salient signals as a guideline for quantization. To fully utilize this idea, this paper compares two new task-aware PTQ methods: Task-Aware Quantization (TAQ), which allocates bitwidths using task-conditioned statistics from hidden activations, and TAQO, which allocates precision based on direct layer sensitivity tests. From a small calibration set, these approaches identify task-relevant layers, preserving their precision while aggressively quantizing the rest. This yields stable task sensitivity profiles and efficient task-specialized models. Across models, TAQ and TAQO outperform the baselines; TAQ leads on Phi-4, while TAQO leads on Llama-3.1, Qwen3, and Qwen2.5. For instance, on Phi-4 it achieves 42.33 EM / 50.81 F1, far surpassing Activation-aware Weight Quantization (AWQ) (2.25 / 7.07), while remaining within <1.0% of the original accuracy at lower average precision.
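The bitwidth-allocation idea described in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: the mean absolute activation statistic, the fixed high/low bitwidth split, and the `keep_ratio` parameter are all placeholder assumptions standing in for the paper's task-conditioned statistics.

```python
import numpy as np

def allocate_bitwidths(activations, high_bits=8, low_bits=3, keep_ratio=0.5):
    """Toy TAQ-style allocation: rank layers by a task-conditioned
    activation statistic and keep the most task-salient layers at
    high precision.

    activations: list of [n_tokens, hidden] arrays, one per layer,
    collected on a small task-specific calibration set."""
    # Sensitivity proxy (an assumption): mean absolute hidden activation.
    scores = np.array([np.abs(a).mean() for a in activations])
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    # Top-scoring layers stay at high precision; the rest are quantized low.
    top = np.argsort(scores)[::-1][:n_keep]
    bits = np.full(len(scores), low_bits)
    bits[top] = high_bits
    return bits

# Synthetic calibration activations: layers 1 and 3 are most active.
rng = np.random.default_rng(0)
acts = [rng.normal(0, s, size=(64, 16)) for s in (0.1, 2.0, 0.3, 1.5)]
print(allocate_bitwidths(acts))  # layers 1 and 3 stay at 8 bits
```

A real system would replace the mean-absolute-activation proxy with the paper's task-conditioned statistics and choose bitwidths under a target average-precision budget.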
Problem

Research questions and friction points this paper is trying to address.

Optimizing quantization for specific tasks using hidden representations
Allocating bitwidths based on task-conditioned activation statistics
Aggressively quantizing irrelevant layers while preserving key precision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses hidden representations as quantization guideline
Allocates bitwidths using task-conditioned activation statistics
Preserves precision in task-relevant layers while aggressively quantizing others
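The direct sensitivity-testing variant (TAQO) can be illustrated with a toy probe: quantize one layer at a time and measure the resulting degradation on calibration inputs. Everything below is an assumed stand-in, not the paper's actual procedure: the linear-tanh toy model, the uniform symmetric quantizer, and the MSE proxy for task accuracy are illustrative choices.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization to the given bitwidth.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

def forward(layers, x):
    # Toy stand-in for an LLM: a stack of linear maps with tanh.
    for w in layers:
        x = np.tanh(x @ w)
    return x

def layer_sensitivities(layers, x, bits=3):
    """Degradation (output MSE vs. full precision) from quantizing
    each layer in isolation at low precision."""
    ref = forward(layers, x)
    sens = []
    for i in range(len(layers)):
        probed = [quantize(w, bits) if j == i else w
                  for j, w in enumerate(layers)]
        sens.append(float(np.mean((forward(probed, x) - ref) ** 2)))
    return sens
```

Layers showing the largest degradation would be kept at high precision, while the rest can be quantized aggressively; the paper's method estimates accuracy impact on the actual task metric rather than this MSE proxy.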
Amit Levi
University of Haifa
Theoretical Computer Science, Algorithms, Machine Learning

Raz Lapid
DeepKeep.ai, Tel Aviv, Israel

Rom Himelstein
Technion – Israel Institute of Technology, Haifa, Israel

Yaniv Nemcovsky
Technion – Israel Institute of Technology, Haifa, Israel

Ravid Shwartz Ziv
Center for Data Science, New York University, New York, USA

Avi Mendelson
Electrical Engineering and Computer Science, Technion
Computer Systems