🤖 AI Summary
This work addresses the high inference latency and low throughput of text-to-audio diffusion models, which typically require dozens of denoising steps. The authors propose a training-free, model-agnostic acceleration framework that leverages a joint semantic and audio-duration-aware caching mechanism to align and reuse previously generated outputs as warm starts, dynamically skipping redundant denoising steps. By integrating dynamic step-skipping gating, quality-aware cache eviction, and refinement strategies, the method achieves 1.8–3.0× latency reduction on real-world audio data with only approximately 1K cache entries, while maintaining or even improving perceptual audio quality.
📝 Abstract
Text-to-audio diffusion models produce high-fidelity audio but require tens of function evaluations (NFEs), incurring multi-second latency and limited throughput. We present SoundWeaver, the first training-free, model-agnostic serving system that accelerates text-to-audio diffusion by warm-starting from semantically similar cached audio. SoundWeaver introduces three components: a Reference Selector that retrieves and temporally aligns cached candidates via semantic and duration-aware gating; a Skip Gater that dynamically determines the fraction of NFEs to skip; and a lightweight Cache Manager that maintains cache utility through quality-aware eviction and refinement. On real-world audio traces, SoundWeaver achieves 1.8--3.0$\times$ latency reduction with a cache of only ${\sim}$1K entries while preserving or improving perceptual quality.
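The three components above can be sketched as a minimal serving loop. This is an illustrative toy only: every name (`Cache`, `embed`, `skip_fraction`), the cosine-similarity gating, and all thresholds are assumptions for exposition, not the paper's actual API or parameters.

```python
# Hypothetical sketch of the warm-start caching flow from the abstract.
# All names, thresholds, and the cosine-similarity gating are assumed,
# not taken from the paper.
import numpy as np

EMB_DIM = 64

def embed(prompt: str) -> np.ndarray:
    """Stand-in for a real text encoder: a deterministic unit vector per prompt."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    v = rng.standard_normal(EMB_DIM)
    return v / np.linalg.norm(v)

class Cache:
    """Toy Reference Selector + Cache Manager (the paper uses ~1K entries)."""
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.entries = []  # (embedding, duration_s, latent, quality_score)

    def lookup(self, emb, duration_s, sim_thresh=0.85, dur_tol=0.25):
        """Semantic + duration-aware gating: return (similarity, latent) or None."""
        best = None
        for e, d, latent, _q in self.entries:
            # Duration gate: relative duration mismatch must stay within tolerance.
            if abs(d - duration_s) / max(d, duration_s) > dur_tol:
                continue
            sim = float(e @ emb)  # cosine similarity (vectors are unit-norm)
            if sim >= sim_thresh and (best is None or sim > best[0]):
                best = (sim, latent)
        return best

    def insert(self, emb, duration_s, latent, quality):
        """Quality-aware eviction: drop the lowest-quality entry when full."""
        if len(self.entries) >= self.capacity:
            worst = min(range(len(self.entries)), key=lambda i: self.entries[i][3])
            self.entries.pop(worst)
        self.entries.append((emb, duration_s, latent, quality))

def skip_fraction(similarity, sim_thresh=0.85, max_skip=0.6):
    """Toy Skip Gater: skip more denoising steps the closer the cached match."""
    return max_skip * max(0.0, (similarity - sim_thresh) / (1.0 - sim_thresh))
```

On a hit, the sampler would start from a partially noised version of the cached latent and run only the remaining `1 - skip_fraction(sim)` of its NFEs; on a miss, it falls back to full sampling and the result is inserted into the cache.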