Music2Latent2: Audio Compression with Summary Embeddings and Autoregressive Decoding

📅 2025-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing audio autoencoders struggle to simultaneously preserve fidelity and cross-segment coherence under high compression ratios, limiting performance in downstream tasks such as music generation and retrieval. To address this, we propose a novel audio autoencoding framework: (1) replacing conventional sequential local encodings with unordered summary embeddings for compact, high-dimensional audio representation; (2) introducing a causal masked autoregressive consistency training scheme that supports variable-length inputs; and (3) employing a two-stage progressive denoising decoder that enhances reconstruction quality without additional computational overhead. Experiments demonstrate that our method significantly improves audio fidelity at equivalent compression ratios and consistently outperforms state-of-the-art continuous autoencoders across multiple music information retrieval (MIR) benchmarks.
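The core idea of summary embeddings — replacing an ordered sequence of local codes with a small unordered set of global ones — can be pictured as cross-attention from a handful of learned queries over the full frame sequence. The sketch below is a minimal NumPy illustration under that assumption; the function and variable names (`summarize`, `queries`, the shapes) are hypothetical and not taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def summarize(frames, queries):
    """Cross-attend a small set of learned queries over all audio frames.

    frames:  (T, d) local feature sequence from an encoder backbone
    queries: (S, d) learned summary queries, with S << T
    Returns (S, d) unordered summary embeddings: each query can pool
    information from the entire input rather than a fixed local window.
    """
    d = frames.shape[-1]
    scores = queries @ frames.T / np.sqrt(d)   # (S, T) similarity scores
    attn = softmax(scores, axis=-1)            # each query attends over all frames
    return attn @ frames                       # (S, d) summary embeddings

rng = np.random.default_rng(0)
T, S, d = 256, 8, 64                 # 256 frames compressed into 8 summaries
frames = rng.standard_normal((T, d))
queries = rng.standard_normal((S, d))
summaries = summarize(frames, queries)
print(summaries.shape)               # (8, 64)
```

Because the queries are a set rather than a sequence, no single embedding is tied to a time position — which is what lets each one capture a distinct global feature of the sample.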

📝 Abstract
Efficiently compressing high-dimensional audio signals into a compact and informative latent space is crucial for various tasks, including generative modeling and music information retrieval (MIR). Existing audio autoencoders, however, often struggle to achieve high compression ratios while preserving audio fidelity and facilitating efficient downstream applications. We introduce Music2Latent2, a novel audio autoencoder that addresses these limitations by leveraging consistency models and a novel approach to representation learning based on unordered latent embeddings, which we call summary embeddings. Unlike conventional methods that encode local audio features into ordered sequences, Music2Latent2 compresses audio signals into sets of summary embeddings, where each embedding can capture distinct global features of the input sample. This enables higher reconstruction quality at the same compression ratio. To handle arbitrary audio lengths, Music2Latent2 employs an autoregressive consistency model trained on two consecutive audio chunks with causal masking, ensuring coherent reconstruction across segment boundaries. Additionally, we propose a novel two-step decoding procedure that leverages the denoising capabilities of consistency models to further refine the generated audio at no additional cost. Our experiments demonstrate that Music2Latent2 outperforms existing continuous audio autoencoders in terms of both audio quality and performance on downstream tasks. Music2Latent2 paves the way for new possibilities in audio compression.
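The two-step decoding mentioned in the abstract builds on the general multistep sampling property of consistency models: the network maps a noisy sample at any noise level directly to a clean estimate, so a second refinement pass only requires re-noising the first estimate to an intermediate level and denoising again. The sketch below illustrates that generic procedure in NumPy; `consistency_denoise` is a toy stand-in for a trained model, and the names, noise levels, and conditioning are assumptions, not the paper's exact recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def consistency_denoise(x_noisy, sigma, cond):
    """Stand-in for a trained consistency model f(x, sigma | cond):
    maps a noisy sample at noise level sigma directly to a clean
    estimate in a single call. Here a toy shrinkage toward the
    conditioning target plays that role (hypothetical)."""
    w = sigma**2 / (sigma**2 + 1.0)
    return (1 - w) * x_noisy + w * cond

def two_step_decode(cond, shape, sigma_max=80.0, sigma_mid=1.0):
    """Generic two-step consistency sampling: one pass from pure noise,
    then partially re-noise the estimate and denoise it again with the
    same network, refining the output."""
    x = sigma_max * rng.standard_normal(shape)           # start from noise
    x0 = consistency_denoise(x, sigma_max, cond)         # step 1: coarse estimate
    x = x0 + sigma_mid * rng.standard_normal(shape)      # re-noise to a lower level
    return consistency_denoise(x, sigma_mid, cond)       # step 2: refinement

cond = np.ones((4, 16))   # toy conditioning signal (e.g. decoded latents)
audio = two_step_decode(cond, cond.shape)
print(audio.shape)        # (4, 16)
```

Since both passes reuse the same network and the second pass replaces rather than extends the sampling budget of a comparable one-step decoder, the refinement can come essentially for free at inference time.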
Problem

Research questions and friction points this paper is trying to address.

Audio Coding
Quality Degradation
Efficiency Shortage
Innovation

Methods, ideas, or system contributions that make the work stand out.

Consistency Model
Summary Embedding
Two-Step Decoding