Abstract:
This project presents a new approach to Quasi-Newton methods for
unconstrained optimization. Quasi-Newton methods cheaply update the existing
Hessian approximation (or its inverse) at each iteration by incorporating
information from the step just completed, which is then discarded. These
methods are based on the so-called Secant equation. In our project we focus on
solving a critical subproblem of the Quasi-Newton algorithm: determining a
suitable step that takes the current approximation to the minimizer to a new,
'better' one. The subproblem can be posed as a Line Search along a generated
search direction, seeking a minimum along that direction. An alternative
technique, on which we focus primarily in this work, is a Trust Region method
that computes the step vector directly, without a dedicated Line Search. This
subproblem is critical to the numerical success of Quasi-Newton methods. We
highlight features of successful implementations to pinpoint and assess the
merits of Trust Region methods. Our numerical results reveal that Trust Region
algorithms improve markedly as the dimension of the problem increases, while
for small-dimensional problems the performance of the two methods is
comparable.
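For reference, the Secant equation and the Trust Region subproblem mentioned in the abstract can be stated in the standard notation of the optimization literature (the symbols B_k, s_k, y_k, and Delta_k below are the conventional ones, not notation taken from this project):

```latex
% Secant equation: the updated Hessian approximation B_{k+1} must map
% the step s_k = x_{k+1} - x_k to the change in gradient
% y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
B_{k+1} s_k = y_k

% Trust Region subproblem: minimize a quadratic model of f within a
% ball of radius \Delta_k around the current iterate x_k.
\min_{p \in \mathbb{R}^n} \; m_k(p)
  = f(x_k) + \nabla f(x_k)^T p + \tfrac{1}{2}\, p^T B_k p
\quad \text{subject to } \|p\| \le \Delta_k
```

The Line Search approach instead fixes a direction p_k and chooses a scalar step length along it, whereas the Trust Region method selects both the direction and the length of p by solving the constrained model problem above.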