🤖 AI Summary
Edge devices face severe memory constraints that hinder the deployment of high-performance agentic AI: small LLMs suffer from limited reasoning capability, while existing test-time scaling (TTS) methods incur prohibitive overhead on edge hardware. To address this, we propose FlashTTS, a lightweight plug-in library built atop vLLM that introduces three techniques: speculative beam extension, asymmetric multi-model memory allocation, and prefix-aware dynamic scheduling. FlashTTS tightly integrates KV-cache reuse, dynamic memory allocation, and fine-grained task scheduling. Evaluated on a single 24 GB consumer-grade GPU, it achieves 2.2x higher average goodput than the vLLM baseline and reduces end-to-end latency by 38%-68%. Under stringent resource constraints, FlashTTS delivers accuracy and responsiveness comparable to cloud-scale LLMs while satisfying the privacy-preservation and ultra-low-latency requirements of edge AI agents.
📝 Abstract
Deploying agentic AI on edge devices is crucial for privacy and responsiveness, but memory constraints typically relegate these systems to smaller Large Language Models (LLMs) with inferior reasoning capabilities. Test-Time Scaling (TTS) can bridge this reasoning gap by dedicating more compute during inference, but existing methods incur prohibitive overhead on edge hardware. To overcome this, we introduce FlashTTS, a serving system that makes TTS practical for memory-constrained LLM reasoning. FlashTTS introduces three synergistic optimizations: (i) Speculative Beam Extension to mitigate system stragglers from irregular reasoning paths; (ii) Asymmetric Multi-Model Memory Allocation to dynamically balance memory between generation and verification; and (iii) Dynamic Prefix-Aware Scheduling to maximize KV-cache reuse. Built as a plug-and-play library for vLLM, FlashTTS enables edge LLMs on a single consumer GPU (24 GB) to match the accuracy and latency of large cloud models. Our evaluation demonstrates that FlashTTS achieves an average 2.2x higher goodput and reduces latency by 38%-68% compared to a vLLM baseline, paving the way for democratized, high-performance agentic AI on edge devices.
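The generate-then-verify loop at the heart of TTS methods such as best-of-N can be sketched as follows. This is a toy illustration only: `generate_candidates` and `verify` are hypothetical stand-ins for the small generator LLM and the verifier model, not FlashTTS's actual API. Note that all sampled paths share the prompt as a common prefix, which is exactly the structure a prefix-aware scheduler exploits to reuse the prompt's KV cache across candidates.

```python
import random

def generate_candidates(prompt: str, n: int, rng: random.Random) -> list[str]:
    # Stand-in for sampling n reasoning paths from a small LLM.
    # Every candidate shares the same prompt prefix.
    return [f"{prompt} -> path-{rng.randint(0, 9)}" for _ in range(n)]

def verify(candidate: str) -> int:
    # Stand-in for a verifier/reward model scoring a reasoning path.
    return sum(ord(c) for c in candidate) % 100

def best_of_n(prompt: str, n: int = 8, seed: int = 0) -> str:
    # Spend extra inference-time compute: sample n paths, keep the
    # highest-scoring one. Memory must be split between the generator
    # (the n paths) and the verifier, which is the balance that
    # asymmetric multi-model memory allocation manages in FlashTTS.
    rng = random.Random(seed)
    candidates = generate_candidates(prompt, n, rng)
    return max(candidates, key=verify)

print(best_of_n("What is 2+2?"))
```

In a real deployment the candidate set is ragged (reasoning paths finish at different times), which is the straggler problem that speculative beam extension targets.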