Programs as Singularities

📅 2025-04-10
🤖 AI Summary
This work develops a correspondence between the structure of Turing machines and the geometry of singularities of real-analytic functions, aiming to deepen the geometric understanding of "simplicity" and Occam's razor in Bayesian induction. Method: discrete Turing machine codes are embedded into a family of noisy codes forming a smooth parameter space, on which a potential function (the negative log-likelihood of a statistical model) has Turing machines as critical points; the Taylor expansion of this potential at such a critical point is related to the combinatorics of error syndromes, connecting the local singularity geometry to the internal structure of the machine, using the Ehrhard-Regnier derivative from linear logic, Watanabe's singular learning theory, and real-analytic geometry. Contribution/Results: the paper shows that two algorithms computing the same predictive function can correspond to singularities with different geometries, so the Bayesian posterior can discriminate between distinct algorithmic implementations, contrary to a purely functional view of inference; within singular learning theory this points to a more nuanced understanding of Occam's razor and of what "simplicity" means in inductive inference.

📝 Abstract
We develop a correspondence between the structure of Turing machines and the structure of singularities of real analytic functions, based on connecting the Ehrhard-Regnier derivative from linear logic with the role of geometry in Watanabe's singular learning theory. The correspondence works by embedding ordinary (discrete) Turing machine codes into a family of noisy codes which form a smooth parameter space. On this parameter space we consider a potential function which has Turing machines as critical points. By relating the Taylor series expansion of this potential at such a critical point to combinatorics of error syndromes, we relate the local geometry to internal structure of the Turing machine. The potential in question is the negative log-likelihood for a statistical model, so that the structure of the Turing machine and its associated singularity is further related to Bayesian inference. Two algorithms that produce the same predictive function can nonetheless correspond to singularities with different geometries, which implies that the Bayesian posterior can discriminate between distinct algorithmic implementations, contrary to a purely functional view of inference. In the context of singular learning theory our results point to a more nuanced understanding of Occam's razor and the meaning of simplicity in inductive inference.
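The embedding described in the abstract, replacing discrete code choices with smooth distributions and taking the negative log-likelihood as a potential, can be illustrated with a toy sketch. This is not the paper's construction: it uses a two-symbol lookup "machine" rather than a full Turing machine, and all names (`predict`, `neg_log_likelihood`) are hypothetical.

```python
import numpy as np

# Hedged sketch: relax a discrete transition table into a smooth family
# of "noisy codes" by replacing each discrete symbol choice with a
# probability distribution (softmax over real-valued logits), then take
# the negative log-likelihood of observed input/output behaviour as a
# potential function on the smooth parameter space.

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def predict(logits, x):
    # distribution over output symbols given input symbol x
    return softmax(logits[x])

def neg_log_likelihood(logits, data):
    # potential on parameter space: sum of -log p(y | x) over observations
    return -sum(np.log(predict(logits, x)[y]) for x, y in data)

data = [(0, 1), (1, 0)]                    # behaviour of the NOT machine
sharp = np.array([[-5., 5.], [5., -5.]])   # near-deterministic NOT code
fuzzy = np.array([[0., 0.], [0., 0.]])     # maximally noisy code

# The near-deterministic code sits much lower on the potential.
assert neg_log_likelihood(sharp, data) < neg_log_likelihood(fuzzy, data)
```

In this picture the discrete machine corresponds to a limit of sharp codes, and the geometry of the potential around such points is what the paper analyses.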
Problem

Research questions and friction points this paper is trying to address.

Linking Turing machines to singularities of analytic functions
Relating Turing machine structure to Bayesian inference
Exploring algorithmic discrimination via singularity geometry
Innovation

Methods, ideas, or system contributions that make the work stand out.

Embed discrete Turing machine codes into a smooth space of noisy codes
Relate the potential's Taylor expansion to error-syndrome combinatorics
Link Bayesian inference to singularity geometry
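The last point, that the Bayesian posterior is sensitive to singularity geometry, can be illustrated numerically. In Watanabe's singular learning theory the marginal likelihood Z(n) = ∫ exp(-n K(w)) dw scales like n^(-λ), where λ (the real log canonical threshold) depends on the local geometry of the potential K. A minimal sketch under toy assumptions (one-dimensional potentials w² and w⁴, not taken from the paper) estimates λ from this scaling:

```python
import numpy as np

# Two potentials with the same minimum location and minimum value but
# different singularity geometry: w^2 (non-degenerate) vs w^4 (degenerate).

def log_Z(K, n, grid):
    # log ∫ exp(-n K(w)) dw, approximated by a Riemann sum on the grid
    dw = grid[1] - grid[0]
    return np.log(np.sum(np.exp(-n * K(grid))) * dw)

def estimate_lambda(K, n1=1_000, n2=10_000):
    grid = np.linspace(-1.0, 1.0, 200_001)
    # slope of -log Z(n) against log n estimates the threshold λ
    return (log_Z(K, n1, grid) - log_Z(K, n2, grid)) / np.log(n2 / n1)

lam_quadratic = estimate_lambda(lambda w: w**2)  # expect λ ≈ 1/2
lam_quartic = estimate_lambda(lambda w: w**4)    # expect λ ≈ 1/4

# The more singular potential w^4 has the smaller λ: its posterior mass
# concentrates more slowly, i.e. it is "simpler" in Watanabe's sense.
assert lam_quartic < lam_quadratic
```

Both potentials have the same minimum value, yet the posterior distinguishes them through λ; this is the mechanism by which functionally equivalent implementations with different singularity geometries become distinguishable.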