AI Summary
This work addresses the inherent limitations of discrete token modeling by proposing a paradigm shift to language modeling in continuous latent spaces. To this end, we introduce TarFlowLM, a Transformer-based framework that integrates autoregressive normalizing flows for density estimation in a continuous latent space. We propose an alternating-direction stack of autoregressive transformations to enable global bidirectional context modeling, and design hybrid coupling layers that explicitly capture the complex dependencies induced by discrete linguistic structure, while establishing theoretical connections to conventional discrete models. The framework supports block-wise generation, multi-level decoding, and flexible context utilization. Empirical evaluation on multiple standard language modeling benchmarks demonstrates that TarFlowLM achieves significantly higher likelihoods than strong baselines, validating the dual advantages of continuous latent-space modeling: enhanced expressivity and improved generative flexibility.
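The alternating-direction idea can be made concrete with a minimal sketch. The code below is our own illustration, not the paper's implementation: a causal GRU stands in for the Transformer backbone, plain affine transformations stand in for the proposed mixture-based couplings, and all names (`CausalAffineFlow`, `AlternatingFlowStack`) are hypothetical. It shows how flipping the sequence between stacked autoregressive flow layers lets the composition condition on both left and right context while each layer keeps a triangular, cheaply invertible Jacobian.

```python
import math

import torch
import torch.nn as nn


class CausalAffineFlow(nn.Module):
    """One affine autoregressive flow layer with strictly causal context.

    Hypothetical stand-in for the paper's Transformer-based layers: a causal
    GRU produces a per-position shift and log-scale from earlier positions
    only, so the Jacobian is triangular and its log-determinant is a sum.
    """

    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True)
        self.proj = nn.Linear(hidden, 2 * dim)

    def forward(self, z: torch.Tensor):
        # Right-shift the input so position t conditions only on z_{<t}.
        ctx = torch.cat([torch.zeros_like(z[:, :1]), z[:, :-1]], dim=1)
        h, _ = self.rnn(ctx)
        shift, log_scale = self.proj(h).chunk(2, dim=-1)
        log_scale = torch.tanh(log_scale)          # keep scales well-conditioned
        out = (z - shift) * torch.exp(-log_scale)  # invertible per position
        log_det = -log_scale.sum(dim=(1, 2))       # triangular Jacobian
        return out, log_det


class AlternatingFlowStack(nn.Module):
    """Stacked causal flows with the sequence direction flipped between
    layers, so the composed transformation sees bidirectional context."""

    def __init__(self, dim: int, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [CausalAffineFlow(dim) for _ in range(n_layers)]
        )

    def forward(self, z: torch.Tensor):
        total_log_det = z.new_zeros(z.shape[0])
        for i, layer in enumerate(self.layers):
            if i % 2 == 1:                 # odd layers run right-to-left
                z = torch.flip(z, dims=[1])
            z, log_det = layer(z)
            if i % 2 == 1:                 # restore the original ordering
                z = torch.flip(z, dims=[1])
            total_log_det = total_log_det + log_det
        return z, total_log_det


if __name__ == "__main__":
    # Log-likelihood of continuous latents under a standard Gaussian base
    # distribution plus the accumulated change-of-variables correction.
    flows = AlternatingFlowStack(dim=16)
    z = torch.randn(2, 10, 16)  # (batch, sequence, latent dim)
    u, log_det = flows(z)
    log_prob = (-0.5 * u.pow(2) - 0.5 * math.log(2 * math.pi)).sum(dim=(1, 2)) + log_det
    print(log_prob.shape)  # torch.Size([2])
```

In this form, density evaluation is parallel across positions, while sampling inverts each layer position by position in its own direction; this sequential inversion is a generic property of autoregressive flows, independent of the simplifications made here.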
Abstract
Autoregressive models have driven remarkable progress in language modeling. Their foundational reliance on discrete tokens, unidirectional context, and single-pass decoding, while central to their success, also motivates exploring a design space that could offer new axes of modeling flexibility. In this work, we explore an alternative paradigm, shifting language modeling from a discrete token space to a continuous latent space. We propose TarFlowLM, a novel framework that employs Transformer-based autoregressive normalizing flows to model these continuous representations. This approach unlocks substantial flexibility: it enables models that capture global bidirectional context through stacked, alternating-direction autoregressive transformations, support block-wise generation with flexible token patch sizes, and facilitate a hierarchical multi-pass generation process. We further propose new mixture-based coupling transformations designed to capture the complex dependencies within a latent space shaped by discrete data, and demonstrate theoretical connections to conventional discrete autoregressive models. Extensive experiments on language modeling benchmarks demonstrate strong likelihood performance and highlight the flexible modeling capabilities inherent in our framework.
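For orientation, the likelihood such a flow stack assigns to continuous latents follows the standard change-of-variables identity (textbook material; the notation below is ours, not the paper's). Composing K invertible layers, the log-density splits into a base-distribution term plus per-layer log-determinants, each cheap to evaluate because autoregressive layers have triangular Jacobians:

```latex
% Change-of-variables for a stack of K invertible layers
% f = f_K \circ \dots \circ f_1 mapping latents z to a base variable u = f(z):
\log p(z)
  = \log p_{\mathrm{base}}\bigl(f(z)\bigr)
  + \sum_{k=1}^{K} \log \left| \det \frac{\partial f_k}{\partial z^{(k-1)}} \right|,
\qquad z^{(0)} = z, \quad z^{(k)} = f_k\bigl(z^{(k-1)}\bigr).
```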