🤖 AI Summary
To address the need for parameter-efficient large language models (LLMs) in resource-constrained settings for vertical tasks (e.g., medical question answering, sentiment analysis), this paper proposes LLM-Sieve, a task-aware structured pruning framework. Methodologically, it introduces: (1) a task-driven joint linear projection that approximates the model's output behavior on the target task; (2) differentiated pruning levels for each weight matrix, discovered via Genetic Algorithm optimization; and (3) full compatibility with LoRA fine-tuning and quantization. The pruned models also generalize across datasets within the same task domain. Evaluated across multiple domain-specific benchmarks, LLM-Sieve achieves 20–75% parameter reduction with only 1–5% accuracy degradation, significantly improving inference speed and deployment efficiency. The resulting models are smaller and faster with near-original accuracy, tailored specifically to downstream vertical applications.
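To make the joint-projection idea concrete, here is a minimal NumPy sketch of one plausible instantiation: rather than factorizing a weight matrix in isolation, it factorizes against activations collected from the target task, so the low-rank pair preserves the outputs the task actually exercises. The function name `task_aware_low_rank`, the plain truncated-SVD recipe, and the fixed `rank` argument are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def task_aware_low_rank(W, X, rank):
    """Rank-r factorization of W that preserves outputs on task activations X.

    W: (d_out, d_in) weight matrix of a linear layer.
    X: (n_samples, d_in) activations collected while running task data.
    Returns A (d_out, r) and B (r, d_in) with W @ x ~= A @ (B @ x) on task inputs.
    (Illustrative sketch; not the authors' exact method.)
    """
    Y = X @ W.T                       # this layer's outputs on task data, (n, d_out)
    # Right singular subspace of the outputs: the output directions
    # that the task data actually exercises.
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    V_r = Vt[:rank].T                 # (d_out, r) orthonormal output basis
    A = V_r                           # decode from the r-dim subspace back to d_out
    B = V_r.T @ W                     # (r, d_in) project input straight into the subspace
    return A, B

# Usage: replace the layer's forward pass y = W @ x with y = A @ (B @ x),
# cutting its parameters from d_out * d_in to r * (d_out + d_in).
```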
📄 Abstract
As Large Language Models (LLMs) are increasingly adopted for narrow tasks - such as medical question answering or sentiment analysis - and deployed in resource-constrained settings, a key question arises: how many parameters does a task actually need? In this work, we present LLM-Sieve, the first comprehensive framework for task-specific pruning of LLMs that achieves 20-75% parameter reduction with only 1-5% accuracy degradation across diverse domains. Unlike prior methods that apply uniform pruning or rely on low-rank approximations of weight matrices or inputs in isolation, LLM-Sieve (i) learns task-aware joint projections to better approximate output behavior, and (ii) employs a Genetic Algorithm to discover differentiated pruning levels for each matrix. LLM-Sieve is fully compatible with LoRA fine-tuning and quantization, and uniquely demonstrates strong generalization across datasets within the same task domain. Together, these results establish a practical and robust mechanism for generating smaller, performant, task-specific models.
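The abstract states only that a Genetic Algorithm discovers a differentiated pruning level per matrix; the sketch below shows the general shape such a search could take. The population size, the discrete keep-ratio grid, the truncation-selection / uniform-crossover / mutation operators, and the `fitness_fn` contract (prune with the candidate ratios, evaluate task accuracy, penalize parameter count) are all assumptions for illustration, not the paper's reported configuration.

```python
import random

def genetic_ratio_search(num_matrices, fitness_fn, pop_size=20, generations=30,
                         mutation_rate=0.1, ratios=(0.25, 0.5, 0.75, 1.0)):
    """Evolve one keep-ratio per weight matrix (illustrative sketch).

    fitness_fn(genome) -> float should prune the model with the genome's
    per-matrix keep-ratios, measure task accuracy, and reward smaller models.
    """
    # Start from random per-matrix keep-ratio assignments.
    population = [[random.choice(ratios) for _ in range(num_matrices)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half of the population.
        scored = sorted(population, key=fitness_fn, reverse=True)
        survivors = scored[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Uniform crossover: each matrix's ratio comes from either parent.
            child = [random.choice(pair) for pair in zip(a, b)]
            # Mutation: occasionally reassign a matrix to a random ratio.
            child = [random.choice(ratios) if random.random() < mutation_rate else g
                     for g in child]
            children.append(child)
        population = survivors + children
    return max(population, key=fitness_fn)
```

Under these assumptions, a natural fitness function would be something like task accuracy minus a penalty proportional to the parameters retained, so the search trades accuracy against compression independently for each matrix.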