Introduction

Returning to my old field (see the references below), I am investigating a broad new family of algorithms for optimization and the solution of equations.

To demonstrate one of these algorithms, we solve the linear system of equations A x = b, where A is an n-by-n tridiagonal matrix with "2" on its diagonal and "-1" immediately above and below it. The right-hand side is chosen so that the solution is x* = (1, 2, 3, ..., n), and the starting point is (1, 1, 1, ..., 1). The input is the problem size n; the output is a table of error values and the solution x*. The algorithm stops when the norm of the error is smaller than 10^-8.
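
As a concrete sketch of this test problem (in Python with NumPy; an illustration of the setup described above, not the author's code, and the function name make_problem is an assumption), the matrix, right-hand side, and starting point can be built as follows:

    import numpy as np

    def make_problem(n):
        # n-by-n tridiagonal matrix with 2 on the diagonal, -1 above and below
        A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        x_star = np.arange(1.0, n + 1.0)   # exact solution (1, 2, ..., n)
        b = A @ x_star                     # right-hand side chosen so A x* = b
        x0 = np.ones(n)                    # starting point (1, 1, ..., 1)
        return A, b, x_star, x0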

The algorithms in this family are monotone when applied to linear systems or quadratic minimization, and this one outperforms the two-point gradient algorithm [2] by an order of magnitude.
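
The new algorithm itself is not specified here. For reference, a minimal sketch of the baseline it is compared against, the two-point step-size gradient method of [2], applied to the equivalent quadratic minimization of f(x) = (1/2) x^T A x - b^T x (whose gradient is g(x) = A x - b) might look as follows; the function name bb_solve and the reading of "error" as ||x - x*|| are assumptions:

    import numpy as np

    def bb_solve(A, b, x_star, x0, tol=1e-8, max_iter=10_000):
        # Two-point step-size gradient method [2] on f(x) = x.T A x / 2 - b.T x
        x = x0.copy()
        g = A @ x - b                         # gradient at the current point
        alpha = (g @ g) / (g @ (A @ g))       # first step: exact line search
        for k in range(max_iter):
            err = np.linalg.norm(x - x_star)  # "error" read as ||x - x*||
            print(f"{k:4d}  {err:.3e}")       # table of error values
            if err < tol:                     # stop when error norm < 10^-8
                break
            x_new = x - alpha * g
            g_new = A @ x_new - b
            s, y = x_new - x, g_new - g
            alpha = (s @ s) / (s @ y)         # two-point step size s.T s / s.T y
            x, g = x_new, g_new
        return x

    A, b, x_star, x0 = make_problem(10)       # setup from the sketch above
    bb_solve(A, b, x_star, x0)

On this quadratic the two-point step size reduces to s^T s / (s^T A s), which is well defined whenever s is nonzero, since A is positive definite.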

Jonathan Barzilai


References

[1] Jonathan Barzilai and Michael A.H. Dempster, "Measuring Rates of Convergence of Numerical Algorithms," Journal of Optimization Theory and Applications, Vol. 78, No. 1, pp. 109–125, 1993.

[2] Jonathan Barzilai and Jonathan M. Borwein, "Two-Point Step Size Gradient Methods," IMA Journal of Numerical Analysis, Vol. 8, pp. 141–148, 1988.

[3] Jonathan Barzilai and Aharon Ben-Tal, "Nonpolynomial and Inverse Interpolation for Line Search: Synthesis and Convergence Rates," SIAM Journal on Numerical Analysis, Vol. 19, pp. 1263–1277, 1982.

[4] Jonathan Barzilai and Aharon Ben-Tal, "A Non-Orthogonal Fourier Expansion for Conic Decomposition," Mathematics of Operations Research, Vol. 6, pp. 363–373, 1981.

[5] Jonathan Barzilai, "On Differentiating Hyperosculatory Error Terms," Report No. 408, Center for Cybernetic Studies, The University of Texas at Austin, 1981.


Demonstration

Enter the problem size n (between 2 and 100):