Generation is Required for Data-Efficient Perception

📅 2025-12-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates whether generative modeling is structurally necessary for data-efficient, human-like visual perception—particularly compositional generalization. Theoretically, we establish for the first time that, under compositional data-generating mechanisms, generative approaches inherently encode critical inductive biases via decoder constraints and inverse inference, whereas discriminative methods cannot replicate these biases equivalently through regularization or architectural design alone. Methodologically, we integrate gradient-based online search, generative replay, decoder-constrained modeling, and theory-driven inductive bias analysis. Experiments on photorealistic image datasets demonstrate that—even with minimal decoder design—generative models achieve substantial gains in compositional generalization, without requiring additional data, large-scale pretraining, or supervised signals. These results empirically validate the structural necessity of generative capacity for human-like visual generalization.

📝 Abstract
It has been hypothesized that human-level visual perception requires a generative approach in which internal representations result from inverting a decoder. Yet today's most successful vision models are non-generative, relying on an encoder that maps images to representations without decoder inversion. This raises the question of whether generation is, in fact, necessary for machines to achieve human-level visual perception. To address this, we study whether generative and non-generative methods can achieve compositional generalization, a hallmark of human perception. Under a compositional data generating process, we formalize the inductive biases required to guarantee compositional generalization in decoder-based (generative) and encoder-based (non-generative) methods. We then show theoretically that enforcing these inductive biases on encoders is generally infeasible using regularization or architectural constraints. In contrast, for generative methods, the inductive biases can be enforced straightforwardly, thereby enabling compositional generalization by constraining a decoder and inverting it. We highlight how this inversion can be performed efficiently, either online through gradient-based search or offline through generative replay. We examine the empirical implications of our theory by training a range of generative and non-generative methods on photorealistic image datasets. We find that, without the necessary inductive biases, non-generative methods often fail to generalize compositionally and require large-scale pretraining or added supervision to improve generalization. By comparison, generative methods yield significant improvements in compositional generalization, without requiring additional data, by leveraging suitable inductive biases on a decoder along with search and replay.
Problem

Research questions and friction points this paper is trying to address.

Asks whether generative modeling is structurally necessary for compositional generalization in vision
Compares generative and non-generative methods as routes to human-level visual perception
Shows that generative methods can enforce the required inductive biases without additional data
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative methods enforce inductive biases via decoder constraints
Efficient inversion achieved through gradient-based search or generative replay
Generative approach enables compositional generalization without extra data
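The gradient-based search mentioned above can be illustrated with a toy sketch: given a constrained decoder, a representation is inferred online by minimizing reconstruction error with respect to the latent code. The decoder below is a hypothetical, trivially simple stand-in (not the paper's actual model), and the step count and learning rate are illustrative assumptions.

```python
# Toy sketch of decoder inversion via gradient-based online search.
# A hypothetical linear decoder maps a latent code z = (z0, z1) to an
# "image" x = (2*z0, 3*z1). Real decoders are neural networks; the
# inference-by-inversion principle is the same.

def decoder(z):
    return [2.0 * z[0], 3.0 * z[1]]

def invert(x, steps=500, lr=0.01):
    """Infer a latent z such that decoder(z) ~= x by gradient descent
    on the squared reconstruction error."""
    z = [0.0, 0.0]
    for _ in range(steps):
        r = decoder(z)
        # Gradient of sum_i (r_i - x_i)^2 w.r.t. z, using the known
        # Jacobian of this linear decoder (diag(2, 3)).
        grad = [2.0 * (r[0] - x[0]) * 2.0,
                2.0 * (r[1] - x[1]) * 3.0]
        z = [z[0] - lr * grad[0], z[1] - lr * grad[1]]
    return z

x = [4.0, 9.0]      # observation generated by latent (2, 3)
z_hat = invert(x)
print(z_hat)        # close to [2.0, 3.0]
```

The offline alternative (generative replay) would instead train an encoder on samples drawn from the constrained decoder, amortizing this search.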
Jack Brady
Max Planck Institute for Intelligent Systems, Tübingen; Tübingen AI Center; ELLIS Institute, Tübingen
Bernhard Schölkopf
Max Planck Institute for Intelligent Systems, Tübingen; Tübingen AI Center; ELLIS Institute, Tübingen
Thomas Kipf
Google DeepMind
Simon Buchholz
Max Planck Institute for Intelligent Systems
Wieland Brendel
Fellow at ELLIS Institute Tübingen; Group Leader, Max Planck Institute for Intelligent Systems
machine learning · computer vision