FISMO: Fisher-Structured Momentum-Orthogonalized Optimizer

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the fundamental trade-off between convergence and efficiency in large-scale nonconvex neural network training, where existing optimizers struggle to exploit geometric structure and curvature information simultaneously. We propose FISMO, an optimizer that integrates a Kronecker-factored approximation of the Fisher information metric into a momentum-orthogonalization framework. This overcomes the isotropic limitation of prior methods such as Muon, enabling adaptive, structured preconditioning tailored to the local loss landscape. By combining a trust-region mechanism with a variance-reduced minibatch strategy, FISMO remains computationally tractable while guaranteeing an $O(1/\sqrt{T})$ convergence rate in nonconvex stochastic optimization. Empirical results show that FISMO outperforms state-of-the-art optimizers on both image classification and language modeling tasks, achieving faster convergence and better final performance.
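The "momentum orthogonalization" that the summary attributes to Muon replaces the momentum matrix with its nearest (semi-)orthogonal matrix, so every nonzero singular value of the update becomes 1. The sketch below illustrates that isotropic update via an exact SVD; Muon itself approximates this with a Newton-Schulz iteration, and the `orthogonalize` helper name is ours, not the paper's.

```python
import numpy as np

def orthogonalize(momentum: np.ndarray) -> np.ndarray:
    """Project a momentum matrix onto the nearest semi-orthogonal matrix.

    Replacing M = U diag(s) Vt with U @ Vt sets every nonzero singular
    value to 1 -- the uniform-singular-value (isotropic) geometry the
    summary describes. An exact SVD is used here for clarity.
    """
    u, _, vt = np.linalg.svd(momentum, full_matrices=False)
    return u @ vt

rng = np.random.default_rng(0)
m = rng.standard_normal((4, 3))   # a stand-in momentum buffer
o = orthogonalize(m)
# All singular values of the orthogonalized update are (numerically) 1.
print(np.round(np.linalg.svd(o, compute_uv=False), 6))
```

FISMO's contribution, per the summary, is precisely to relax this uniform-spectrum constraint with Fisher-structured anisotropy rather than discard the spectral information entirely.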

📝 Abstract
Training large-scale neural networks requires solving nonconvex optimization where the choice of optimizer fundamentally determines both convergence behavior and computational efficiency. While adaptive methods like Adam have long dominated practice, the recently proposed Muon optimizer achieves superior performance through orthogonalized momentum updates that enforce isotropic geometry with uniform singular values. However, this strict isotropy discards potentially valuable curvature information encoded in gradient spectra, motivating optimization methods that balance geometric structure with adaptivity. We introduce the FISMO (Fisher-Structured Momentum-Orthogonalized) optimizer, which generalizes isotropic updates to incorporate anisotropic curvature information through Fisher information geometry. By reformulating the optimizer update as a trust-region problem constrained by a Kronecker-factored Fisher metric, FISMO achieves structured preconditioning that adapts to local loss landscape geometry while maintaining computational tractability. We establish convergence guarantees for FISMO in stochastic nonconvex settings, proving an $\mathcal{O}(1/\sqrt{T})$ rate for the expected squared gradient norm with explicit characterization of variance reduction through mini-batching. Empirical evaluation on image classification and language modeling benchmarks demonstrates that FISMO achieves superior training efficiency and final performance compared to established baselines.
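The abstract's "Kronecker-factored Fisher metric" follows the standard K-FAC pattern: for a weight matrix, the Fisher block is approximated as a Kronecker product of an input-side covariance and an output-side covariance, so the natural-gradient step reduces to two small matrix inverses. The sketch below shows that generic preconditioning step under stated assumptions; the function name, argument layout, and damping are ours, and how FISMO combines this with orthogonalized momentum is not specified in the text above.

```python
import numpy as np

def kfac_precondition(grad, in_cov, out_cov, damping=1e-3):
    """Apply a Kronecker-factored Fisher preconditioner to a matrix gradient.

    Assumes the Fisher block for an (out x in) weight matrix factors as
    F ~= A (x) S, with A the input covariance and S the output-gradient
    covariance (as in K-FAC). Then (A (x) S)^{-1} vec(G) = vec(S^{-1} G A^{-1}),
    so preconditioning needs only two small inverses. Damping keeps the
    factor inverses well conditioned.
    """
    in_inv = np.linalg.inv(in_cov + damping * np.eye(in_cov.shape[0]))
    out_inv = np.linalg.inv(out_cov + damping * np.eye(out_cov.shape[0]))
    return out_inv @ grad @ in_inv

# With identity covariances and no damping, the preconditioner is a no-op.
g = np.array([[1.0, 2.0], [3.0, 4.0]])
step = kfac_precondition(g, np.eye(2), np.eye(2), damping=0.0)
```

Anisotropy enters exactly here: unlike the uniform-spectrum Muon update, the two covariance factors rescale directions according to local curvature estimates.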
Problem

Research questions and friction points this paper is trying to address.

nonconvex optimization
adaptive optimization
curvature information
Fisher information geometry
large-scale neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fisher information geometry
orthogonalized momentum
anisotropic preconditioning
trust-region optimization
Kronecker-factored approximation
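The trust-region idea listed above can be made concrete: constrain the step's length measured in the Kronecker-factored Fisher metric rather than the Euclidean norm. The sketch below scales an update direction to a fixed metric radius; the closed form for the quadratic form uses the Kronecker identity, but the exact constrained problem FISMO solves is not reproduced on this page, and all names here are illustrative.

```python
import numpy as np

def trust_region_step(direction, in_cov, out_cov, radius):
    """Scale an update direction onto a Fisher-metric trust-region boundary.

    With F ~= in_cov (x) out_cov, the squared metric length of eta * D is
    eta^2 * tr(D.T @ out_cov @ D @ in_cov); solving eta^2 * q = radius^2
    gives the largest step that respects the trust region.
    """
    q = np.trace(direction.T @ out_cov @ direction @ in_cov)
    eta = radius / np.sqrt(max(q, 1e-12))
    return eta * direction

# With identity covariances the metric is Euclidean, so the returned
# step has Frobenius norm equal to the trust-region radius.
d = np.array([[3.0, 0.0], [0.0, 4.0]])   # Frobenius norm 5
scaled = trust_region_step(d, np.eye(2), np.eye(2), radius=0.5)
```

In practice one would also shrink the step (rather than always hitting the boundary) when the unconstrained step is already short; that accept/cap logic is omitted for brevity.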