🤖 AI Summary
This work proposes a learnable Krasnosel’skii–Mann iteration framework for solving fixed-point problems involving nonexpansive mappings, aiming to accelerate convergence in the average case while preserving theoretical convergence guarantees. By introducing learnable perturbations that satisfy a summability condition, the method achieves locally linear convergence, up to a vanishing bias, under a metric subregularity assumption. Presented as the first approach to integrate learning-to-optimize (L2O) principles into general fixed-point iterations, the parametrization also encompasses any iteration that converges locally at a sufficiently fast linear rate. The framework is applied to operator splitting methods such as Douglas–Rachford splitting, demonstrating significant acceleration in solving structured monotone inclusions and a best approximation problem.
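For reference, the iteration being perturbed has the classical Krasnosel'skii–Mann form with error terms. The sketch below states that standard form only; the exact learnable parametrization of the perturbations is specified in the paper and is not reproduced here.

```latex
% Classical Krasnosel'skii--Mann iteration for a nonexpansive map T,
% with perturbations e_k required to be summable so that convergence
% to a fixed point of T is preserved.
\[
  x_{k+1} = x_k + \lambda_k \bigl( T(x_k) - x_k \bigr) + e_k,
  \qquad \lambda_k \in (0,1), \qquad
  \sum_{k=0}^{\infty} \lVert e_k \rVert < \infty .
\]
```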
📝 Abstract
We introduce a principled learning-to-optimize (L2O) framework for solving fixed-point problems involving general nonexpansive mappings. Our idea is to deliberately inject summable perturbations into a standard Krasnosel'skii–Mann iteration to improve its average-case performance over a specific distribution of problems while retaining its convergence guarantees. Under a metric subregularity assumption, we prove that the proposed parametrization includes only iterations that locally achieve linear convergence (up to a vanishing bias term) and that it encompasses all iterations that do so at a sufficiently fast rate. We then demonstrate how our framework can be used to augment several widely used operator splitting methods to accelerate the solution of structured monotone inclusion problems, and validate our approach on a best approximation problem using an L2O-augmented Douglas–Rachford splitting algorithm.
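As a rough illustration of the mechanism (not the paper's implementation), the sketch below runs a perturbed Krasnosel'skii–Mann iteration on the Douglas–Rachford operator for a toy two-set problem in R^2. The sets, step size, and the geometrically decaying random perturbation are all hypothetical placeholders standing in for the learned perturbations; the decay is chosen only so that the summability condition holds.

```python
import numpy as np

# Sketch: Krasnosel'skii-Mann (KM) iteration with injected summable
# perturbations, applied to the Douglas-Rachford (DR) operator for a
# simple two-set problem in R^2 (a halfspace and a ball). The perturbation
# is a placeholder; in the paper it would come from a learned model.

def proj_halfspace(x, a, b):
    """Project x onto the halfspace {y : <a, y> <= b}."""
    viol = a @ x - b
    return x - max(viol, 0.0) / (a @ a) * a

def proj_ball(x, center, radius):
    """Project x onto the closed ball with the given center and radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius / n * d

def dr_operator(x, a, b, center, radius):
    """Douglas-Rachford operator T = 0.5 * (I + R_B R_A) built from reflections."""
    ra = 2.0 * proj_halfspace(x, a, b) - x          # reflect across the halfspace
    rb = 2.0 * proj_ball(ra, center, radius) - ra   # reflect across the ball
    return 0.5 * (x + rb)

def perturbed_km(x0, lam=0.5, iters=100, pert_scale=0.1, pert_decay=0.5):
    """Run x_{k+1} = x_k + lam * (T(x_k) - x_k) + e_k with summable e_k."""
    a, b = np.array([1.0, 1.0]), 1.0                # halfspace <a, x> <= b
    center, radius = np.zeros(2), 1.0               # unit ball
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for k in range(iters):
        t = dr_operator(x, a, b, center, radius)
        # Placeholder perturbation: geometric decay guarantees sum_k ||e_k|| < inf,
        # the summability condition that preserves convergence.
        e = pert_scale * pert_decay**k * rng.standard_normal(2)
        x = x + lam * (t - x) + e
    # The shadow point P_A(x) approaches a point of the intersection.
    return proj_halfspace(x, a, b)

print(perturbed_km(np.array([3.0, -2.0])))
```

Because the injected perturbations are summable, the iterates keep the fixed-point convergence of the unperturbed scheme; a learned model would instead shape the early perturbations to speed up convergence on the problem distribution of interest.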