🤖 AI Summary
To address the high computational cost of self-attention, excessive KV cache memory pressure, and the difficulty of optimizing time-to-first-token (TTFT) when deploying long-sequence large language models (LLMs) on edge devices, this paper proposes EdgeInfinite-Instruct, a lightweight, hardware-aware optimization framework. Methodologically, it introduces: (1) a segmented supervised fine-tuning (S-SFT) strategy that enhances instruction-following capability while updating only 0.1% of parameters; (2) a fixed-shape computation graph design tailored to neural processing units (NPUs) that enables fine-grained quantization; and (3) a scenario-aware KV cache management mechanism. Evaluated on long-context benchmarks and real-world mobile tasks, EdgeInfinite-Instruct achieves low-latency, high-accuracy on-device inference without full model retraining or specialized infrastructure. It significantly reduces TTFT and memory footprint while preserving model accuracy, outperforming existing approaches in both efficiency and practical deployability.
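The "only 0.1% of parameters" claim amounts to freezing the backbone and training a tiny subset of weights. A minimal sketch of how that fraction is computed (the parameter names and sizes below are illustrative, not taken from the paper):

```python
def trainable_fraction(param_sizes, trainable_names):
    """Fraction of parameters left trainable after freezing the rest.

    param_sizes: dict mapping parameter-group name -> element count
    trainable_names: set of group names that stay trainable
    """
    total = sum(param_sizes.values())
    trainable = sum(n for name, n in param_sizes.items()
                    if name in trainable_names)
    return trainable / total

# Hypothetical 1B-parameter model: only a small "memory_gate" group
# (1M parameters) is fine-tuned, everything else stays frozen.
sizes = {"embedding": 50_000_000,
         "transformer_blocks": 949_000_000,
         "memory_gate": 1_000_000}
frac = trainable_fraction(sizes, {"memory_gate"})  # 0.001, i.e. 0.1%
```

In practice this is the usual parameter-efficient fine-tuning pattern: set `requires_grad = False` on frozen groups and pass only the small trainable subset to the optimizer.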
📝 Abstract
Deploying Transformer-based large language models (LLMs) on resource-constrained edge devices for long-sequence tasks remains challenging due to the quadratic time complexity of self-attention and growing Key-Value (KV) cache demands. While existing KV cache optimizations improve memory efficiency, they often fail to reduce time to first token (TTFT) and may degrade performance through token pruning. Alternative sequence modeling architectures address some of these limitations but typically require full retraining and lack infrastructure support. EdgeInfinite offers an efficient solution by fine-tuning only a small subset of parameters, maintaining quality while reducing both computational and memory costs and improving TTFT. However, its instruction-following ability is limited, and it lacks mobile-specific optimizations. To address these issues, we propose EdgeInfinite-Instruct, which introduces a Segmented Supervised Fine-Tuning (S-SFT) strategy tailored to long-sequence tasks such as summarization and question answering. We further optimize EdgeInfinite-Instruct for efficient deployment on edge NPUs by employing fine-grained post-training quantization (PTQ) to reduce computational demands while maintaining accuracy, and by implementing a fixed-shape computation graph that balances memory usage and on-device efficiency through scenario-specific customization of input token and cache sizes. Experiments on long-context benchmarks and real-world mobile tasks show that our approach improves domain-specific performance while maintaining efficiency on NPU-accelerated edge devices.
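The fixed-shape computation graph hinges on every NPU invocation seeing identical tensor shapes: a long input is split into equal-sized segments and the last segment is padded up to the scenario-specific token budget. A minimal sketch of that idea (function name, padding scheme, and budget value are illustrative assumptions, not the paper's implementation):

```python
def to_fixed_shape(tokens, seq_budget, pad_id=0):
    """Split a token sequence into fixed-size segments, padding the
    final segment so every NPU graph call has the same input shape."""
    segments = []
    for i in range(0, len(tokens), seq_budget):
        seg = tokens[i:i + seq_budget]
        seg = seg + [pad_id] * (seq_budget - len(seg))  # right-pad last segment
        segments.append(seg)
    return segments

# Hypothetical scenario config: a summarization profile might fix the
# per-call budget at 4 tokens here (real budgets would be far larger).
segments = to_fixed_shape(list(range(10)), seq_budget=4)
# Three segments, each exactly 4 tokens; the last is padded with pad_id.
```

Because the segment shape is compile-time constant, the NPU graph (and its quantization parameters) can be prepared once per scenario rather than recompiled per request.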