Democratizing Agentic AI with Fast Test-Time Scaling on the Edge

📅 2025-08-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Edge devices face severe memory constraints that hinder the deployment of high-performance agentic AI: small LLMs suffer from limited reasoning capability, while existing test-time scaling (TTS) methods incur prohibitive overhead on edge hardware. To address this, we propose FlashTTS, a lightweight plug-in library built atop vLLM that introduces three novel techniques: speculative beam extension, asymmetric multi-model memory allocation, and dynamic prefix-aware scheduling. FlashTTS tightly integrates KV-cache reuse, dynamic memory allocation, and fine-grained task scheduling. Evaluated on a single 24 GB consumer-grade GPU, it achieves a 2.2× average goodput improvement over the vLLM baseline and reduces end-to-end latency by 38%–68%. Under stringent resource constraints, FlashTTS delivers accuracy and responsiveness comparable to cloud-scale LLMs while simultaneously satisfying the privacy-preservation and ultra-low-latency requirements of edge AI agents.

📝 Abstract
Deploying agentic AI on edge devices is crucial for privacy and responsiveness, but memory constraints typically relegate these systems to smaller Large Language Models (LLMs) with inferior reasoning capabilities. Test-Time Scaling (TTS) can bridge this reasoning gap by dedicating more compute during inference, but existing methods incur prohibitive overhead on edge hardware. To overcome this, we introduce FlashTTS, a serving system that makes TTS practical for memory-constrained LLM reasoning. FlashTTS introduces three synergistic optimizations: (i) Speculative Beam Extension to mitigate system stragglers from irregular reasoning paths; (ii) Asymmetric Multi-Model Memory Allocation to dynamically balance memory between generation and verification; and (iii) Dynamic Prefix-Aware Scheduling to maximize KV-cache reuse. Built as a plug-and-play library for vLLM, FlashTTS enables edge LLMs on a single consumer GPU (24 GB) to match the accuracy and latency of large cloud models. Our evaluation demonstrates that FlashTTS achieves an average 2.2x higher goodput and reduces latency by 38%-68% compared to a vLLM baseline, paving the way for democratized, high-performance agentic AI on edge devices.
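The asymmetric multi-model memory allocation idea above, rebalancing one fixed GPU budget between a generation model and a verification model, can be sketched as a toy policy. This is an illustration only; the function name, demand inputs, and proportional split are assumptions, not FlashTTS's actual algorithm:

```python
def split_memory(total_gb, gen_demand_gb, ver_demand_gb, floor_gb=2.0):
    """Split a GPU memory budget between a generator and a verifier.

    Illustrative sketch: each model keeps a reserved floor, and the
    remaining budget is divided in proportion to current KV-cache
    demand. Returns (generator_gb, verifier_gb) summing to total_gb.
    """
    assert total_gb >= 2 * floor_gb, "budget too small for both models"
    free = total_gb - 2 * floor_gb
    demand = gen_demand_gb + ver_demand_gb
    # With no reported demand, split the slack evenly.
    gen_share = 0.5 if demand == 0 else gen_demand_gb / demand
    gen_gb = floor_gb + free * gen_share
    return gen_gb, total_gb - gen_gb
```

On a 24 GB card with the generator reporting three times the verifier's demand, the generator would receive the bulk of the free budget while the verifier keeps its floor; a real system would recompute this split dynamically as reasoning beams grow and shrink.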
Problem

Research questions and friction points this paper is trying to address.

Enabling efficient agentic AI on memory-constrained edge devices
Reducing prohibitive overhead of Test-Time Scaling on edge hardware
Bridging reasoning gap between small and large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Speculative Beam Extension for straggler mitigation
Asymmetric Multi-Model Memory Allocation optimization
Dynamic Prefix-Aware Scheduling for KV-cache reuse
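The third contribution, scheduling requests so that those sharing a prompt prefix run adjacently and reuse cached KV entries, can be sketched in a few lines. This is a hypothetical character-level illustration, not FlashTTS's scheduler; a real system would group by token-level prefix matches against the live KV cache:

```python
from collections import defaultdict

def prefix_aware_order(requests, prefix_len=32):
    """Order prompt strings so requests sharing a prefix are adjacent.

    Toy sketch: bucket requests by their first `prefix_len` characters,
    then serve larger buckets first, since one cached prefill is
    amortized over more requests.
    """
    groups = defaultdict(list)
    for r in requests:
        groups[r[:prefix_len]].append(r)
    ordered = []
    for _, batch in sorted(groups.items(), key=lambda kv: -len(kv[1])):
        ordered.extend(batch)
    return ordered
```

Serving same-prefix requests back-to-back keeps the shared prefix's KV blocks resident, which matters most on a memory-constrained edge GPU where evicted cache entries force a costly re-prefill.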