FreeAct: Freeing Activations for LLM Quantization

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing orthogonal transformation-based quantization methods for large language models (LLMs), which rely on static one-to-one constraints and struggle to accommodate the dynamic activation distribution differences across tokens in multimodal or diffusion models. To overcome this, we propose FreeAct, a novel framework that leverages the rank-deficient nature of activations to construct a solution space beyond inverse matrices, thereby decoupling activation and weight transformations. FreeAct enables dynamic, token-aware orthogonal quantization by assigning dedicated activation transformation matrices to different token types while maintaining a unified weight transformation. Experiments on multimodal and diffusion LLMs demonstrate that FreeAct significantly outperforms existing approaches, achieving performance gains of up to 5.3%.

📝 Abstract
Quantization is pivotal for mitigating the significant memory and computational overhead of Large Language Models (LLMs). While emerging transformation-based methods have successfully enhanced quantization by projecting feature spaces onto smoother manifolds using orthogonal matrices, they typically enforce a rigid one-to-one transformation constraint. This static approach fails to account for the dynamic patterns inherent in input activations, particularly within diffusion LLMs (dLLMs) and Multimodal LLMs (MLLMs), where varying token types exhibit distinct distributions. To advance this, we propose FreeAct, a novel quantization framework that relaxes the static one-to-one constraint to accommodate dynamic activation disparities. Theoretically, we leverage the rank-deficient nature of activations to derive a solution space that extends beyond simple inverse matrices, enabling the decoupling of activation transformations from weights. Methodologically, FreeAct identifies token-specific dynamics (e.g., vision vs. text, or masked tokens) and allocates distinct transformation matrices to the activation side, while maintaining a unified, static transformation for the weights. Extensive experiments across dLLMs and MLLMs demonstrate that FreeAct significantly outperforms baselines, with up to a 5.3% performance improvement, supported by in-depth analyses. Our code will be publicly released.
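The relaxation the abstract describes can be checked numerically: because rank-deficient activations have a non-trivial null space, many activation-side transforms beyond the orthogonal matrix Q itself leave the layer output unchanged, even while the weight side keeps a single static transform. A minimal NumPy sketch of this idea, where all names, shapes, and the rank construction are illustrative assumptions rather than the paper's implementation:

```python
# Illustrative sketch of decoupled activation/weight transforms for
# rank-deficient activations; not the paper's actual code.
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 32, 16, 10          # tokens, hidden dim, activation rank (r < d)

# Rank-deficient activations: X = A @ B has rank at most r < d.
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
W = rng.standard_normal((d, d))

# A single orthogonal transform Q, as in prior rotation-based methods.
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Basis of the (right) null space of X: columns N with X @ N ~= 0,
# which exists precisely because rank(X) < d.
_, _, Vt = np.linalg.svd(X)
null_basis = Vt[r:].T          # shape (d, d - r)

def token_transform(seed):
    """A token-type-specific transform T = Q + N with X @ N ~= 0, so that
    X @ T @ (Q.T @ W) == X @ W even though T differs from Q."""
    M = np.random.default_rng(seed).standard_normal((d - r, d))
    return Q + null_basis @ M

# Distinct activation transforms for two hypothetical token types,
# while the weight side keeps the one static transform Q.T @ W.
T_text, T_vision = token_transform(1), token_transform(2)

ref = X @ W
for T in (T_text, T_vision):
    assert np.allclose(X @ T @ (Q.T @ W), ref)
```

Under these assumptions, each token type gets its own activation matrix drawn from the null-space-augmented solution set, mirroring the abstract's claim that the one-to-one inverse constraint can be relaxed without changing the layer's output.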
Problem

Research questions and friction points this paper is trying to address.

LLM quantization
activation dynamics
transformation constraint
multimodal LLMs
diffusion LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation-aware quantization
dynamic transformation
rank-deficient activations
decoupled quantization
multimodal LLMs