Learning outcomes

This course is an introduction to the basic concepts of optimization. Two questions are examined: how to characterize an extremum, and how to design a numerical method for finding it.

Goals

The course focuses on the development and study of numerical algorithms for solving unconstrained and constrained nonlinear optimization problems.

Content

We consider continuous, not necessarily convex, unconstrained and constrained nonlinear optimization problems. The first and most important part of the course is devoted to the unconstrained case. After studying the characterization of minima of a generic unconstrained optimization problem, we develop the main ideas behind the line-search and trust-region approaches used to globalize first- and second-order methods such as the steepest descent method, Newton's method, and quasi-Newton methods, with some insight into both convergence theory and numerical considerations.

The second part of the course is devoted to the constrained case. It derives the Karush-Kuhn-Tucker (KKT) conditions and presents the key ideas of several well-known methods, including sequential quadratic programming, the augmented Lagrangian method, and interior-point methods.
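As a minimal sketch of the line-search idea mentioned above, the following Python snippet implements steepest descent with a backtracking (Armijo) line search on a small quadratic test problem. The function names and the test problem are illustrative choices, not material from the course itself:

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=1000):
    """Steepest descent globalized by a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # first-order stationarity test
            break
        d = -g                           # steepest-descent direction
        t, beta, c = 1.0, 0.5, 1e-4      # initial step, shrink factor, Armijo constant
        # Backtrack until the Armijo sufficient-decrease condition holds
        while f(x + t * d) > f(x) + c * t * (g @ d):
            t *= beta
        x = x + t * d
    return x

# Illustrative test problem: a strictly convex quadratic
# f(x) = 0.5 x^T A x - b^T x, whose minimizer solves A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_star = steepest_descent(f, grad, np.zeros(2))
```

On this quadratic the iterates converge linearly to the unique minimizer; trust-region methods, also covered in the course, replace the step-length search by a model minimization over a region where the model is trusted.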

Assessment method

Two exams: an oral exam on the theory and a written exam on the exercises.

Sources, references and any support material

Jorge Nocedal and Stephen J. Wright, Numerical Optimization (second edition), Springer, New York, 2006.

Language of instruction

French