Efficient Reasoning on the Edge

📅 2026-03-17
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the challenges of deploying large language models on edge devices, where high computational overhead, lengthy inference trajectories, and substantial key-value (KV) cache memory hinder practicality. To overcome these limitations, the authors propose an efficient inference framework that integrates lightweight LoRA adapters with supervised fine-tuning to instill reasoning capabilities, complemented by a reinforcement learning–driven budget control mechanism to compress response length. Additionally, a dynamic adapter switching strategy activates reasoning modules only when needed, while KV cache sharing during prompt encoding reduces first-token latency. Experiments on Qwen2.5-7B demonstrate that the proposed approach significantly shortens response length and latency, enabling accurate and efficient on-device inference under stringent resource constraints.
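The budget control idea can be sketched as a length-penalized reward of the kind an RL fine-tuning loop might optimize. This is a minimal illustration, not the paper's exact formulation: the token `budget`, the `penalty` coefficient, and the 0/1 correctness signal are all assumed values.

```python
# Sketch of a length-penalized reward for RL-based budget control.
# `budget`, `penalty`, and the binary correctness signal are illustrative
# assumptions, not the paper's actual reward design.

def budget_reward(correct: bool, response_tokens: int,
                  budget: int = 512, penalty: float = 0.001) -> float:
    """Reward correctness, but charge for tokens spent beyond the budget."""
    base = 1.0 if correct else 0.0
    overflow = max(0, response_tokens - budget)  # only over-budget tokens cost
    return base - penalty * overflow
```

Under such a reward, a policy is pushed to keep its reasoning trace short whenever brevity does not cost accuracy, which is the trade-off the summary describes.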


๐Ÿ“ Abstract
Large language models (LLMs) with chain-of-thought reasoning achieve state-of-the-art performance across complex problem-solving tasks, but their verbose reasoning traces and large context requirements make them impractical for edge deployment. These challenges include high token generation costs, large KV-cache footprints, and inefficiencies when distilling reasoning capabilities into smaller models for mobile devices. Existing approaches often distill reasoning traces from larger models into smaller ones, but these traces are verbose and stylistically redundant, which is undesirable for on-device inference. In this work, we propose a lightweight approach to enable reasoning in small LLMs using LoRA adapters combined with supervised fine-tuning. We further introduce budget forcing via reinforcement learning on these adapters, significantly reducing response length with minimal accuracy loss. To address memory-bound decoding, we exploit parallel test-time scaling, improving accuracy at a minor latency increase. Finally, we present a dynamic adapter-switching mechanism that activates reasoning only when needed and a KV-cache sharing strategy during prompt encoding, reducing time-to-first-token for on-device inference. Experiments on Qwen2.5-7B demonstrate that our method achieves efficient, accurate reasoning under strict resource constraints, making LLM reasoning practical for mobile scenarios. Videos demonstrating our solution running on mobile devices are available on our project page.
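Parallel test-time scaling of the sort the abstract mentions is commonly realized as self-consistency: decode several candidates in parallel and take a majority vote over final answers. The sketch below assumes that setup; `sample_answer` is a hypothetical stand-in for one batched decode of the small model, and the simple vote is one possible aggregation rule, not necessarily the paper's.

```python
# Sketch of parallel test-time scaling via majority vote (self-consistency).
# `sample_answer` is an illustrative stand-in for one parallel decode.

from collections import Counter
from typing import Callable, List

def majority_vote(samples: List[str]) -> str:
    """Return the most frequent final answer among parallel samples."""
    return Counter(samples).most_common(1)[0][0]

def scale_parallel(sample_answer: Callable[[int], str], n: int = 8) -> str:
    """Decode n candidates (batched on-device in practice) and vote."""
    return majority_vote([sample_answer(seed) for seed in range(n)])
```

Because the n decodes share the same prompt encoding and run batched, the extra cost shows up mostly as throughput, which matches the abstract's claim of improved accuracy at a minor latency increase.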
Problem

Research questions and friction points this paper is trying to address.

edge deployment
large language models
reasoning efficiency
KV-cache
on-device inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

LoRA adapters
budget forcing
parallel test-time scaling
dynamic adapter switching
KV-cache sharing
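The dynamic adapter switching listed above can be sketched as a cheap router in front of the model: only queries judged to need chain-of-thought activate the reasoning LoRA adapter. The keyword heuristic and adapter names below are illustrative assumptions; the actual router could be a learned classifier, and in a PEFT-style stack the returned choice would map to enabling or disabling an adapter.

```python
# Sketch of dynamic adapter switching: route easy queries to the base model
# and hard ones to a reasoning LoRA adapter. The keyword router and the
# adapter names are illustrative assumptions, not the paper's mechanism.

REASONING_HINTS = ("prove", "solve", "how many", "calculate", "why")

def needs_reasoning(query: str) -> bool:
    """Crude stand-in for a router/classifier over incoming queries."""
    q = query.lower()
    return any(hint in q for hint in REASONING_HINTS)

def pick_adapter(query: str) -> str:
    # With a PEFT-style API this choice would drive adapter activation;
    # here we simply return which path the query takes.
    return "reasoning_lora" if needs_reasoning(query) else "base"
```

Keeping the base path adapter-free means simple queries skip the reasoning trace entirely, which is where the latency savings come from.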