Improved Convergence Rates of Anderson Acceleration for a Large Class of Fixed-Point Iterations

Data Science Seminar

Casey Garner (University of Minnesota)


Anderson acceleration (AA) has become a classical method for attempting to accelerate the convergence of fixed-point iterations; however, though the method was first proposed in the 1960s and has displayed immense benefit in practice, it was not until 2015 that Toth and Kelley proved that AA converges at least as fast as the underlying fixed-point method. Since their work, many papers have attempted to better understand the convergence behavior of AA. In this talk, we present the first proof showing that AA has an improved root-linear convergence factor over the fixed-point iteration in the case of linear, symmetric operators. We further present numerical experiments demonstrating AA's superior performance over the standard fixed-point method for Tyler's M-estimation.
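For readers unfamiliar with the method, the following is a minimal sketch of a standard windowed Anderson acceleration, AA(m), applied to a linear, symmetric fixed-point map g(x) = Ax + b (the setting of the talk). All function names and the test operator here are illustrative, not taken from the speaker's work: at each step the scheme solves a small least-squares problem over the last m residual differences and combines the stored iterates accordingly.

```python
import numpy as np

def fixed_point(g, x0, iters):
    """Plain fixed-point iteration x_{k+1} = g(x_k)."""
    x = x0.astype(float).copy()
    for _ in range(iters):
        x = g(x)
    return x

def anderson(g, x0, iters, m=5):
    """Windowed Anderson acceleration AA(m) in a common 'gamma' form.

    Keeps histories of g(x_i) and residuals f_i = g(x_i) - x_i, solves
    min_gamma || f_k - dF gamma ||_2 over the last m differences, and sets
    x_{k+1} = g(x_k) - dG gamma.
    """
    x = x0.astype(float).copy()
    G, F = [], []  # histories of g(x_i) and residuals f_i
    for _ in range(iters):
        gx = g(x)
        f = gx - x
        G.append(gx)
        F.append(f)
        if len(F) > m + 1:           # keep only the last m+1 entries
            G.pop(0)
            F.pop(0)
        if len(F) == 1:
            x = gx                   # first step: plain fixed-point update
        else:
            dF = np.column_stack([F[i + 1] - F[i] for i in range(len(F) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = gx - dG @ gamma
    return x

# Illustrative test problem: g(x) = Ax + b with A symmetric, spectral radius < 1,
# so the unique fixed point is x* = (I - A)^{-1} b.
A = np.array([[0.9, 0.1],
              [0.1, 0.8]])
b = np.array([1.0, 1.0])
g = lambda x: A @ x + b
x_star = np.linalg.solve(np.eye(2) - A, b)

x_aa = anderson(g, np.zeros(2), iters=30)
x_fp = fixed_point(g, np.zeros(2), iters=30)
```

Because A has spectral radius close to 1 here, the plain iteration contracts slowly, while AA on a linear problem typically converges in a number of steps comparable to the dimension; comparing `x_aa` and `x_fp` against `x_star` illustrates the gap the talk quantifies.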

Start date
Tuesday, Feb. 27, 2024, 1:25 p.m.
End date
Tuesday, Feb. 27, 2024, 2:25 p.m.

Lind Hall 325 or via Zoom

Zoom registration