🤖 AI Summary
Traditional normalizing flows (NFs) are constrained by the requirement that their forward transformations be explicitly invertible, which forces inefficient autoregressive decoding during inverse sampling, a key performance bottleneck. This paper proposes Bidirectional Normalizing Flow (BiFlow), an NF framework that relaxes explicit invertibility: it jointly learns a forward transformation and a trainable approximate inverse mapping, enabling non-causal, highly parallel reverse modeling. BiFlow integrates Transformer architectures with autoregressive flow components, supporting flexible loss designs and non-causal network topologies. On ImageNet, BiFlow improves generation quality over prior NF methods, accelerates sampling by up to two orders of magnitude (matching the efficiency of 1-NFE approaches), and establishes new state-of-the-art performance among NF-based models.
📝 Abstract
Normalizing Flows (NFs) have been established as a principled framework for generative modeling. Standard NFs consist of a forward process and a reverse process: the forward process maps data to noise, while the reverse process generates samples by inverting it. Typical NF forward transformations are constrained by explicit invertibility, ensuring that the reverse process can serve as their exact analytic inverse. Recent developments in TARFlow and its variants have revitalized NF methods by combining Transformers and autoregressive flows, but have also exposed causal decoding as a major bottleneck. In this work, we introduce Bidirectional Normalizing Flow ($\textbf{BiFlow}$), a framework that removes the need for an exact analytic inverse. BiFlow learns a reverse model that approximates the underlying noise-to-data inverse mapping, enabling more flexible loss functions and architectures. Experiments on ImageNet demonstrate that BiFlow, compared to its causal decoding counterpart, improves generation quality while accelerating sampling by up to two orders of magnitude. BiFlow yields state-of-the-art results among NF-based methods and competitive performance among single-evaluation ("1-NFE") methods. Following recent encouraging progress on NFs, we hope our work will draw further attention to this classical paradigm.
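The causal-decoding bottleneck that the abstract describes can be seen in a minimal sketch of an autoregressive affine flow. This is an illustrative toy, not the paper's implementation: a fixed strictly lower-triangular linear map (`W`) stands in for the causal Transformer that would produce the shift and scale in a TARFlow-style model. The forward data-to-noise pass is fully parallel, while the exact analytic inverse must decode one coordinate at a time; a BiFlow-style learned reverse model would replace that sequential loop with a single non-causal pass.

```python
import numpy as np

# Toy autoregressive affine flow on a length-D vector.
# mu_i and sigma_i depend only on x_{<i}; here they come from a fixed
# strictly lower-triangular linear map (purely illustrative stand-in
# for a causal Transformer).
rng = np.random.default_rng(0)
D = 8
W = np.tril(rng.normal(size=(D, D)), k=-1) * 0.1  # strictly lower-triangular => causal

def mu_sigma(x):
    h = W @ x
    return h, np.exp(0.05 * h)  # positive scale

def forward(x):
    # Data -> noise: fully parallel, since mu/sigma at position i
    # use only the preceding entries x_{<i}.
    mu, sigma = mu_sigma(x)
    return (x - mu) / sigma

def exact_inverse(z):
    # Noise -> data: the exact analytic inverse must decode causally,
    # one coordinate at a time (the bottleneck BiFlow removes).
    x = np.zeros(D)
    for i in range(D):
        mu, sigma = mu_sigma(x)       # only x[:i] influences position i
        x[i] = mu[i] + sigma[i] * z[i]
    return x

x = rng.normal(size=D)
z = forward(x)
x_rec = exact_inverse(z)
assert np.allclose(x, x_rec)  # sequential decoding recovers x exactly
```

The D sequential steps of `exact_inverse` are what make sampling slow for long token sequences; learning an approximate inverse network trades exactness for one parallel evaluation.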