Diminishing Returns in Expanding Generative Models and Godel-Tarski-Lob Limits

📅 2026-03-20
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates the fundamental limits of capability growth in generative models under continual scaling of capacity, data, and compute. By introducing a task-space framework that integrates algorithmic probability with formal logic—including Rosser’s incompleteness, Tarski’s undefinability, and Löb’s theorem—the work establishes, for the first time, Gödelian constraints on the boundaries of generative model capabilities. Theoretically, it proves that as model scale increases, the marginal gain in task performance asymptotically approaches zero, and an insurmountable lower bound exists for logically unsolvable tasks. Furthermore, the study quantifies an upper bound on performance improvement in predictive settings. These results reveal intrinsic bottlenecks and inherent limitations governing the scalability of generative systems.

Technology Category

Application Category

📝 Abstract
Modern generative modelling systems are increasingly improved by expanding model capacity, training data, and computational resources. While empirical studies have documented such scaling behaviour across architectures including generative adversarial networks, variational autoencoders, transformer-based models, and diffusion models, the theoretical limits of capability growth in expanding generative systems remain poorly understood. In this paper we develop a general task-space framework for analysing expanding generative reasoning systems. Each system induces a subset of a global task space representing the tasks it can successfully solve, and system capability is measured by the probability mass of this solved-task set under a fixed task distribution. Within this framework we prove a structural result showing that, under mild assumptions, the marginal improvement in solved tasks must converge to zero as system capacity increases. Thus expanding generative systems may continue to gain capability, but the probability mass of newly solvable tasks necessarily diminishes asymptotically. We further provide a prediction-theoretic refinement based on complexity-weighted hypothesis classes inspired by algorithmic probability, yielding quantitative bounds on marginal improvement in prediction settings. Finally, we examine logical reasoning tasks and show that classical results from mathematical logic -- including Rosser incompleteness, Tarski's undefinability theorem, and Löb's theorem -- imply the persistence of unresolved logical tasks within sufficiently expressive reasoning systems. Together these results provide a mathematical perspective on the asymptotic behaviour of expanding generative systems, showing that long-run capability growth is constrained both by diminishing marginal improvements in task coverage and by fundamental logical limitations on internal reasoning.
Problem

Research questions and friction points this paper is trying to address.

generative models
scaling limits
diminishing returns
task coverage
logical incompleteness
Innovation

Methods, ideas, or system contributions that make the work stand out.

diminishing returns
generative models
task-space framework
algorithmic probability
logical incompleteness
🔎 Similar Papers
No similar papers found.