🤖 AI Summary
This work addresses the limitations of traditional machine learning, which relies on fixed geometric representations and struggles under distribution shift and task changes, often suffering catastrophic forgetting. To overcome this, we propose the Metric-Topology Factorization (MTF) framework, which conceptualizes intelligence as the ability to dynamically reshape representational geometry. By decoupling plastic metric deformations from stable topological memory, MTF turns desired behaviors into stable attractors, enabling rapid adaptation without repeated optimization. We instantiate this framework in the Topological Urysohn Machine (TUM), which integrates Riemannian metric deformation, topological encoding, Memory-Amortized Metric Inference (MAMI), and spectral task signatures. TUM demonstrates the geometric incompleteness of fixed metrics and transcends the stability-plasticity trade-off through geometric switching. Experiments demonstrate significant improvements over continual learning baselines such as EWC under task permutation, mirroring, and parity transformations, effectively mitigating catastrophic forgetting.
📝 Abstract
Contemporary ML often equates intelligence with optimization: searching for solutions within a fixed representational geometry. This works in static regimes but breaks under distributional shift, task permutation, and continual learning, where even mild topological changes can invalidate learned solutions and trigger catastrophic forgetting. We propose Metric-Topology Factorization (MTF) as a unifying geometric principle: intelligence is not navigation through a fixed maze, but the ability to reshape representational geometry so that desired behaviors become stable attractors. Learning corresponds to metric contraction (a controlled deformation of Riemannian structure), while task identity and environmental variation are encoded topologically and stored separately in memory. We show that any fixed metric is geometrically incomplete: for any local metric representation, some topological transformations render it singular or incoherent, implying an unavoidable stability-plasticity trade-off for weight-based systems. MTF resolves this by factorizing stable topology from plastic metric warps, enabling rapid adaptation via geometric switching rather than re-optimization. Building on this, we introduce the Topological Urysohn Machine (TUM), which implements MTF through memory-amortized metric inference (MAMI): spectral task signatures index amortized metric transformations, letting a single learned geometry be reused across permuted, reflected, or parity-altered environments. This explains robustness to task reordering, resistance to catastrophic forgetting, and generalization across transformations that defeat conventional continual learning methods (e.g., EWC).
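To make the MAMI idea concrete, here is a minimal sketch (not the paper's implementation; all names are illustrative) of how a spectral task signature can index cached metric transformations. The signature here is the sorted eigenvalue spectrum of the feature Gram matrix, which is invariant under feature permutation, so a permuted variant of a known task retrieves the stored geometry ("geometric switching") instead of re-optimizing:

```python
# Illustrative sketch of memory-amortized metric inference (MAMI).
# Assumptions: a task is a data matrix X; a "metric warp" is a whitening
# matrix; the spectral signature is the Gram-matrix eigenvalue spectrum.
import numpy as np

def spectral_signature(X, ndigits=4):
    """Sorted eigenvalues of the feature Gram matrix, rounded so the
    tuple is hashable. Permuting feature columns conjugates the Gram
    matrix by a permutation matrix, leaving the spectrum unchanged."""
    eig = np.linalg.eigvalsh(X.T @ X)          # ascending eigenvalues
    return tuple(np.round(eig, ndigits))

def fit_metric_warp(X):
    """Slow path: 'learn' a metric deformation (here, a whitening map)."""
    return np.linalg.inv(X.T @ X / len(X))

class MetricMemory:
    """Toy MAMI store: spectral signature -> amortized metric warp."""
    def __init__(self):
        self.store = {}

    def lookup_or_fit(self, X):
        key = spectral_signature(X)
        if key not in self.store:               # unseen topology: optimize
            self.store[key] = fit_metric_warp(X)
        return self.store[key]                  # seen topology: fast switch

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
mem = MetricMemory()
mem.lookup_or_fit(X)                            # fit once on the base task
perm = rng.permutation(4)
mem.lookup_or_fit(X[:, perm])                   # permuted task: cache hit
assert len(mem.store) == 1                      # one geometry, reused
```

The design choice to key on the spectrum rather than the raw weights is what buys permutation invariance; richer signatures (e.g., Laplacian spectra) would extend the same lookup to reflections and other transformations the abstract mentions.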