🤖 AI Summary
Existing audio autoencoders struggle to achieve high compression ratios and high-fidelity reconstruction at the same time, and typically force a choice between continuous embeddings and discrete token representations. This paper proposes CoDiCodec, the first unified audio autoencoder capable of **producing both continuous embeddings and discrete tokens** from a single trained model. Its core innovation is FSQ-dropout, an end-to-end trainable mechanism that requires no auxiliary loss; combined with finite scalar quantization (FSQ), consistency regularization, and parallel decoding, it achieves efficient compression at an ~11 Hz continuous frame rate and a 2.38 kbps discrete bitrate. Experiments demonstrate that CoDiCodec significantly outperforms state-of-the-art continuous and discrete models in reconstruction quality at comparable bitrates. Moreover, it supports multitask generative applications while delivering high compression efficiency, low latency, and strong perceptual fidelity.
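To make the FSQ component concrete, here is a minimal sketch of finite scalar quantization: each latent dimension is bounded and rounded to one of a few evenly spaced levels, so the discrete code is just the grid index. The level counts below are hypothetical toy values; CoDiCodec's actual configuration (and its FSQ-dropout mechanism, straight-through gradients, etc.) is not reproduced here.

```python
import numpy as np

# Hypothetical per-dimension level counts (NOT CoDiCodec's real config).
LEVELS = np.array([5, 5, 5, 5])

def fsq_quantize(z):
    """Bound each latent dimension with tanh, then round to the nearest
    of L evenly spaced levels (straight-through estimator omitted)."""
    half = (LEVELS - 1) / 2.0
    bounded = np.tanh(z) * half      # squash into [-half, half]
    return np.round(bounded) / half  # snap to grid, rescale to [-1, 1]

# Bits per latent frame for this toy codebook: log2 of the grid size.
bits_per_frame = np.log2(LEVELS).sum()  # 4 * log2(5) ≈ 9.3 bits

q = fsq_quantize(np.array([0.0, 10.0, -10.0, 0.3]))
```

Because the grid is fixed and implicit, FSQ needs no learned codebook and no commitment/codebook losses, which is consistent with the paper's claim that training uses only a single consistency loss.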
📄 Abstract
Efficiently representing audio signals in a compressed latent space is critical for latent generative modelling. However, existing autoencoders often force a choice between continuous embeddings and discrete tokens. Furthermore, achieving high compression ratios while maintaining audio fidelity remains a challenge. We introduce CoDiCodec, a novel audio autoencoder that overcomes these limitations by efficiently encoding global features via summary embeddings, and by producing both compressed continuous embeddings at ~11 Hz and discrete tokens at a rate of 2.38 kbps from the same trained model, offering unprecedented flexibility for different downstream generative tasks. This is achieved through Finite Scalar Quantization (FSQ) and a novel FSQ-dropout technique, and does not require additional loss terms beyond the single consistency loss used for end-to-end training. CoDiCodec supports both autoregressive decoding and a novel parallel decoding strategy, with the latter achieving superior audio quality and faster decoding. CoDiCodec outperforms existing continuous and discrete autoencoders at similar bitrates in terms of reconstruction audio quality. Our work enables a unified approach to audio compression, bridging the gap between continuous and discrete generative modelling paradigms.
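A quick back-of-the-envelope check relates the two numbers in the abstract, assuming the discrete tokens are emitted at the same ~11 Hz frame rate as the continuous embeddings (a plausible but unconfirmed reading; the paper's exact token layout is not given here):

```python
# Hedged arithmetic: bits available per latent frame under the assumption
# that discrete tokens share the ~11 Hz continuous frame rate.
FRAME_RATE_HZ = 11.0   # assumed latent frame rate
BITRATE_BPS = 2380.0   # 2.38 kbps discrete bitrate from the abstract

bits_per_frame = BITRATE_BPS / FRAME_RATE_HZ  # roughly 216 bits per frame
```

Under that assumption, each latent frame carries on the order of a couple of hundred bits, which the FSQ grid would distribute across the latent dimensions.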