A Theory of the Mechanics of Information: Generalization Through Measurement of Uncertainty (Learning is Measuring)

📅 2025-10-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional machine learning relies on explicit parametric models and strong distributional assumptions, which limit flexibility, interpretability, and adaptability to dynamic data updates. To address these limitations, the paper proposes a model-free, information-theoretic framework, *information mechanics*, that quantifies uncertainty in raw data directly via surprisal, bypassing explicit probabilistic modeling. The approach is parameter-free, traceable, and editable (it supports incremental addition and deletion of training data), enabling general-purpose inference. Its core conceptual contribution is to redefine learning as the measurement of uncertainty and to establish fundamental "physical laws" at the information level, unifying tasks as diverse as generative inference, causal discovery, anomaly detection, and time-series forecasting. Experiments demonstrate performance at or near the state of the art across multiple benchmarks; the method natively handles missing data and keeps the entire inference process human-interpretable.
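The summary leaves the computation implicit, but the core quantity is standard information theory: the surprisal (self-information) of an outcome x is I(x) = -log2 p(x), measured in bits. Below is a minimal, hypothetical Python sketch of the model-free flavor described above, where probabilities come straight from observed counts in the raw data rather than from a fitted parametric distribution (the toy weather data is invented purely for illustration):

```python
import math
from collections import Counter

def surprisal(probability: float) -> float:
    """Self-information of an outcome: I(x) = -log2 p(x), in bits."""
    return -math.log2(probability)

# Model-free estimate: probabilities are read directly off the raw data's
# empirical counts; no parametric distribution is fitted.
observations = ["sunny", "sunny", "rain", "sunny", "fog", "rain", "sunny"]
counts = Counter(observations)
total = sum(counts.values())

for outcome, count in counts.items():
    p = count / total
    print(f"{outcome}: p = {p:.3f}, surprisal = {surprisal(p):.3f} bits")

# Rare outcomes ("fog") carry high surprisal; common ones ("sunny") carry
# little. Framing learning as measurement means reading such quantities off
# the data rather than fitting a model to it.
```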

📝 Abstract
Traditional machine learning relies on explicit models and domain assumptions, limiting flexibility and interpretability. We introduce a model-free framework that uses surprisal (information-theoretic uncertainty) to analyze and perform inference directly from raw data, eliminating distribution modeling, reducing bias, and enabling efficient updates, including direct edits and deletion of training data. By quantifying relevance through uncertainty, the approach enables generalizable inference across tasks including generative inference, causal discovery, anomaly detection, and time-series forecasting. It emphasizes traceability, interpretability, and data-driven decision making, offers a unified, human-understandable framework for machine learning, and achieves at or near state-of-the-art performance across most common machine learning tasks. The mathematical foundations create a "physics" of information that enables these techniques to apply effectively to a wide variety of complex data types, including data with missing values. Empirical results indicate that this may be a viable alternative to neural networks for scalable machine learning and artificial intelligence while maintaining human understandability of the underlying mechanics.
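To make "direct edits and deletion of training data" concrete: in an instance-based scheme the stored cases are the model, so unlearning is a literal data edit rather than a retraining run. The sketch below is hypothetical and is not Howso's actual engine; the exponential distance kernel merely stands in for the surprisal-derived relevance weighting the abstract describes:

```python
import math

class InstanceStore:
    """Hypothetical editable, instance-based inference store (illustrative only)."""

    def __init__(self):
        self.cases = []  # raw (feature, target) training cases

    def add(self, x, y):
        self.cases.append((x, y))   # incremental addition: just store the case

    def delete(self, x, y):
        self.cases.remove((x, y))   # direct deletion: remove the case itself

    def predict(self, query):
        # Weight each stored case by exp(-distance): cases that are less
        # "surprising" as matches for the query contribute more. This kernel
        # is a stand-in for the paper's surprisal-based relevance.
        weights = [math.exp(-abs(query - x)) for x, _ in self.cases]
        return sum(w * y for w, (_, y) in zip(weights, self.cases)) / sum(weights)

store = InstanceStore()
for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
    store.add(x, y)
print(store.predict(2.5))   # interpolates from nearby cases
store.delete(2.0, 4.0)      # editing the training data is a direct list edit
print(store.predict(2.5))   # the prediction updates immediately, no retraining
```

Because inference reads directly off stored cases, every prediction can be traced back to the specific training records that produced it, which is the traceability property the abstract emphasizes.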
Problem

Research questions and friction points this paper is trying to address.

Introduces a model-free framework that uses surprisal for direct inference from data
Enables generalizable inference across multiple machine learning tasks
Provides an interpretable alternative to neural networks with comparable performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-free framework that uses surprisal for inference (see the sketch below)
Quantifying relevance through uncertainty to enable generalization
Mathematical foundations that create a "physics" of information
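One concrete instance of surprisal-driven inference named above is anomaly detection: cases with unusually high estimated surprisal, i.e., cases sitting in low-density regions of the data, are flagged as anomalies. The paper's exact estimator is not reproduced on this page; the following hypothetical sketch uses mean k-nearest-neighbor distance as a stand-in for a surprisal estimate:

```python
def knn_surprisal_scores(data, k=3):
    """Score each point by mean distance to its k nearest neighbors.

    Points in sparse regions get large distances, playing the role of high
    surprisal (low empirical density). A stand-in, not the paper's estimator.
    """
    scores = []
    for i, x in enumerate(data):
        dists = sorted(abs(x - y) for j, y in enumerate(data) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

data = [1.0, 1.1, 0.9, 1.2, 1.0, 8.5]        # one obvious outlier
scores = knn_surprisal_scores(data)
threshold = 2 * sum(scores) / len(scores)    # simple illustrative cutoff
print([x for x, s in zip(data, scores) if s > threshold])  # -> [8.5]
```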
Christopher J. Hazard
Howso Incorporated
Michael Resnick
Howso Incorporated
Jacob Beel
Howso Incorporated
Jack Xia
Howso Incorporated
Cade Mack
Howso Incorporated
Dominic Glennie
Howso Incorporated
Matthew Fulp
Howso Incorporated
David Maze
Howso Incorporated
Martin Koistinen
Howso Incorporated