Learning Algorithms in the Limit

📅 2025-06-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the learnability of computable functions in the limit, focusing on whether Gold's inductive inference framework applies under realistic computational constraints, specifically restricted input sources and bounded computational complexity. It establishes that general recursive functions are not learnable in the limit from input-output observations alone. To overcome this limitation, the authors introduce two further observational modalities: time-bound observations of execution and policy-trajectory observations of computational agents. The work integrates computational observability and constrained input modeling into limit learning theory. It shows that policy-trajectory learning reduces to learning rational functions, revealing connections to finite-state transducer inference. The theoretical toolkit includes computability analysis, complexity-constrained modeling, and existence proofs for characteristic sets. A key negative result shows that the class of linear-time computable functions admits neither a computable nor a polynomial-mass characteristic set.
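To make the "learning in the limit" setting concrete, here is a minimal sketch of Gold-style identification by enumeration over a toy hypothesis class (a hypothetical illustration of the general paradigm, not the paper's algorithm): the learner outputs the first hypothesis in a fixed enumeration that is consistent with all input-output observations seen so far, and converges once enough data rules out earlier hypotheses.

```python
def limit_learner(hypotheses, observations):
    """Return the index of the first hypothesis consistent with every
    (input, output) observation seen so far, or None if all fail."""
    for i, h in enumerate(hypotheses):
        if all(h(x) == y for x, y in observations):
            return i
    return None

# Toy enumerable class: f_k(x) = x + k for k = 0..9
hypotheses = [lambda x, k=k: x + k for k in range(10)]

# Input-output observations of the target f_3
obs = [(0, 3), (5, 8)]
print(limit_learner(hypotheses, obs))  # -> 3
```

As more observations of the target arrive, the learner's guess stabilizes on the correct index; the paper's negative results concern classes (like the general recursive functions) where no such converging learner exists for input-output observations alone.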

📝 Abstract
This paper studies the problem of learning computable functions in the limit by extending Gold's inductive inference framework to incorporate *computational observations* and *restricted input sources*. Complementary to the traditional Input-Output Observations, we introduce Time-Bound Observations and Policy-Trajectory Observations to study the learnability of general recursive functions under more realistic constraints. While input-output observations do not suffice for learning the class of general recursive functions in the limit, we overcome this learning barrier by imposing computational complexity constraints or supplementing with approximate time-bound observations. Further, we build a formal framework around observations of *computational agents* and show that learning computable functions from policy trajectories reduces to learning rational functions from input and output, thereby revealing interesting connections to finite-state transducer inference. On the negative side, we show that computable or polynomial-mass characteristic sets cannot exist for the class of linear-time computable functions, even for policy-trajectory observations.
Problem

Research questions and friction points this paper is trying to address.

Extends Gold's framework to learn computable functions with computational constraints
Introduces new observation types to study learnability under realistic limitations
Analyzes learning barriers and connections to finite-state transducer inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends Gold's framework with computational observations
Introduces Time-Bound and Policy-Trajectory Observations
Reduces policy-trajectory learning to rational functions
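Since the reduction above lands in the class of rational functions, i.e. functions computed by finite-state transducers, a small Mealy-machine sketch may help fix intuitions (a hypothetical example, not the paper's construction): the transducer below copies its input over the alphabet {a, b} while doubling every 'a'.

```python
def run_transducer(delta, out, start, word):
    """Run a Mealy machine: delta maps (state, symbol) -> next state,
    out maps (state, symbol) -> output string; returns the concatenated output."""
    state, result = start, []
    for sym in word:
        result.append(out[(state, sym)])
        state = delta[(state, sym)]
    return "".join(result)

# One-state transducer over {a, b} that doubles every 'a'
delta = {(0, "a"): 0, (0, "b"): 0}
out = {(0, "a"): "aa", (0, "b"): "b"}
print(run_transducer(delta, out, 0, "abba"))  # -> "aabbaa"
```

Inferring such transducers from input-output pairs is the well-studied problem that the policy-trajectory setting connects to.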