Stop settling for vanilla fixed-point updates. Optimization algorithms often reduce to fixed point steps, and there's a toolbox of options.
🔎 What's a fixed point?
Consider a function T(x) mapping a Euclidean space back into itself. We say a point x* is a fixed point of T if
x* = T(x*),
i.e. applying T to x* gives back x* itself. For many optimization algorithms, the fixed points of T are precisely the solutions we're after.
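To make that concrete, here's a tiny sketch with a toy quadratic of my own choosing: the gradient-descent map T(x) = x − α∇f(x) has the minimizer of f as its unique fixed point.

```python
import numpy as np

# Toy example: minimize f(x) = 0.5 * x^T A x - b^T x.
# The gradient-descent map T(x) = x - alpha * (A x - b) has the
# minimizer A^{-1} b as its unique fixed point: x* = T(x*).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
alpha = 0.2  # step size chosen for illustration

def T(x):
    return x - alpha * (A @ x - b)

x_star = np.linalg.solve(A, b)
print(np.allclose(T(x_star), x_star))  # True: x* is a fixed point of T
```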
📓 Fixed Point Iterations
Here are formulas for a few schemes for finding fixed points. Brief descriptions and rough code sketches follow.
🔹 Banach–Picard
This is the vanilla update most people are familiar with: repeatedly apply T to your current point. When T is a contraction, each step shrinks the distance to the fixed point by at least the contraction factor.
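A minimal sketch of the update x^{k+1} = T(x^k); the contraction T below is just a toy example of mine, not anything from the post.

```python
def banach_picard(T, x0, iters=100):
    # Repeatedly apply T; converges linearly when T is a contraction.
    x = x0
    for _ in range(iters):
        x = T(x)
    return x

# Toy contraction with fixed point at 2.0.
T = lambda x: 0.5 * x + 1.0
print(banach_picard(T, 10.0))  # ~2.0
```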
🔹 Krasnosel'skiĭ-Mann (KM)
Take a weighted average of your current point and T(current), gently steering toward a fixed point without overshooting.
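A sketch of one common KM form, x^{k+1} = (1 − λ) x^k + λ T(x^k), assuming a nonexpansive T and a constant relaxation weight λ of my choosing.

```python
def krasnoselskii_mann(T, x0, lam=0.5, iters=200):
    # x_{k+1} = (1 - lam) * x_k + lam * T(x_k)
    # Converges for nonexpansive T with lam in (0, 1).
    x = x0
    for _ in range(iters):
        x = (1.0 - lam) * x + lam * T(x)
    return x
```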
🔹 Fast KM
Like KM, but with a slice of the previous step mixed in (inertia), which speeds up convergence of the fixed-point residual on average.
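One way to realize that "slice of the previous step" is an inertial extrapolation before the KM averaging; the momentum schedule below is a placeholder of mine, not the tuned schedule from the literature, so see the linked references for the version with guarantees.

```python
def fast_km(T, x0, lam=0.5, iters=200):
    # Sketch: extrapolate along the last displacement, then take a KM step.
    x_prev, x = x0, x0
    for k in range(1, iters + 1):
        beta = k / (k + 3.0)                 # placeholder inertial weight
        y = x + beta * (x - x_prev)          # reuse the previous movement
        x_prev, x = x, (1.0 - lam) * y + lam * T(y)
    return x
```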
🔹 Heavy-Ball
Add “momentum” by applying T not to x^k, but to x^k plus a fraction of your last step, giving extra push.
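A sketch assuming the inertial form x^{k+1} = T(x^k + β (x^k − x^{k-1})), with a constant β picked purely for illustration.

```python
def heavy_ball(T, x0, beta=0.3, iters=200):
    # x_{k+1} = T(x_k + beta * (x_k - x_{k-1}))
    # beta = 0.3 is an arbitrary illustrative choice.
    x_prev, x = x0, x0
    for _ in range(iters):
        x_prev, x = x, T(x + beta * (x - x_prev))
    return x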
🔹 Halpern
At each update, apply T and blend the result with a fixed anchor point u, gradually shifting all the weight onto T; the iterates converge to the fixed point of T closest to u.
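A sketch of the standard Halpern form x^{k+1} = β_k u + (1 − β_k) T(x^k); the anchoring weights β_k = 1/(k+2) are one common illustrative choice, with β_k → 0.

```python
def halpern(T, x0, u=None, iters=200):
    # x_{k+1} = beta_k * u + (1 - beta_k) * T(x_k), with beta_k -> 0.
    u = x0 if u is None else u
    x = x0
    for k in range(iters):
        beta = 1.0 / (k + 2)   # illustrative anchoring schedule
        x = beta * u + (1.0 - beta) * T(x)
    return x
```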
🔹 Viscosity Approximation
Blend a simple contraction f(x) with T(x) at each step, shrinking f's weight over time; the iterates are driven toward the fixed point of T that satisfies a variational inequality involving f.
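A sketch of one common viscosity form, x^{k+1} = α_k f(x^k) + (1 − α_k) T(x^k); the schedule α_k = 1/(k+2) is a placeholder, and f is assumed to be a contraction.

```python
def viscosity(T, f, x0, iters=200):
    # x_{k+1} = alpha_k * f(x_k) + (1 - alpha_k) * T(x_k), alpha_k -> 0.
    x = x0
    for k in range(iters):
        alpha = 1.0 / (k + 2)  # placeholder decaying weight on f
        x = alpha * f(x) + (1.0 - alpha) * T(x)
    return x
```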
🔹 Ishikawa
Do a two-stage move: first mix toward T(x) to get y, then apply T to y and blend back with your original x for extra stability.
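A sketch of the two-stage Ishikawa update with constant blending weights of my choosing; the classical scheme lets them vary with k.

```python
def ishikawa(T, x0, alpha=0.5, beta=0.5, iters=200):
    # Two-stage update:
    #   y_k     = (1 - beta)  * x_k + beta  * T(x_k)
    #   x_{k+1} = (1 - alpha) * x_k + alpha * T(y_k)
    x = x0
    for _ in range(iters):
        y = (1.0 - beta) * x + beta * T(x)
        x = (1.0 - alpha) * x + alpha * T(y)
    return x
```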
Each scheme comes with its own assumptions and convergence behavior, so choose one with your application in mind. Here's a link to a .pdf of this image and linked references.
🔭 Looking ahead
There are also more sophisticated, quasi-Newton-style schemes for fixed point iteration, like Anderson Acceleration and SuperMann.