Learning outcomes

This course is an introduction to the basic concepts of numerical optimization. Two questions are examined: how to characterize an extremum, and how to design a numerical method for finding it.

Goals

The course focuses on the development and the study of numerical algorithms to solve unconstrained and constrained nonlinear optimization problems.

Content

The course treats continuous, but not necessarily convex, unconstrained and constrained nonlinear optimization problems. The first and most important part is devoted to the unconstrained case. After studying the characterization of minima for a generic unconstrained optimization problem, we develop the main ideas behind the so-called line-search and trust-region approaches used to globalize first- and second-order methods such as the steepest descent method, Newton's method and quasi-Newton methods, with insight into both their convergence properties and practical numerical behavior. The second part of the course addresses the constrained case: it derives the Karush-Kuhn-Tucker conditions and presents the key ideas of several well-known methods, including the sequential quadratic programming method, the augmented Lagrangian method and the interior-point method.
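To make the line-search idea concrete, the sketch below implements steepest descent globalized by a backtracking (Armijo) line search, applied to a simple quadratic. This is a minimal illustration of the technique, not course material; the test function, step parameters and tolerances are chosen for the example only.

```python
import numpy as np

def backtracking_line_search(f, grad, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    alpha = alpha0
    fx, gx = f(x), grad(x)
    while f(x + alpha * d) > fx + c * alpha * gx @ d:
        alpha *= rho
    return alpha

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=10_000):
    """Steepest descent globalized by a backtracking line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # first-order stationarity test
            break
        d = -g                        # steepest-descent direction
        alpha = backtracking_line_search(f, grad, x, d)
        x = x + alpha * d
    return x

# Example: a convex quadratic with minimizer (1, 2)
f = lambda x: (x[0] - 1.0)**2 + 4.0 * (x[1] - 2.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 8.0 * (x[1] - 2.0)])
x_star = steepest_descent(f, grad, x0=[0.0, 0.0])
```

The same globalization strategy applies unchanged when the direction `d` comes from a Newton or quasi-Newton model instead of the negative gradient.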

Table of contents

Introduction

Part I: Unconstrained Optimization
  A. Optimality Conditions
  B. Overview of Algorithms
  C. Line Search Methods
  D. Trust-Region Methods
  E. Calculating Derivatives
  F. Least-Squares Problems
  G. Nonlinear Equations

Part II: Constrained Optimization
  A. Optimality Conditions
  B. Overview of Algorithms

Assessment method

Two tests: the first, on the theory, is an oral exam; the second, on the exercises, is either a written exam on computer or a project presented orally.

Sources, references and any support material

Jorge Nocedal and Stephen J. Wright, Numerical Optimization (second edition), Springer, New York, 2006.

Language of instruction

English