
Nonlinear Programming (NLP)

Lab Presentation on Nonlinear Programming (Constrained, Nonlinear Optimization)

What is NLP?

Nonlinear programming refers to an optimization scenario in which the objective function is possibly nonlinear, and there are equality and/or inequality constraints which are also possibly nonlinear.

My interest is in applying it to optimal control problems, which can have quite complicated nonlinear dynamics and constraints.

The Math:

The nonlinear, in/equality constrained optimization problem can be written as:

\(\quad\quad\quad \min_x  f (x)\)

subject to \( h(x) = 0\)

\(\quad\quad\quad\quad\quad g(x) \leq 0 \)

where \( x \) is a vector of length \( n \), \( f(x) \) returns a scalar, \( h(x) \) is a vector of length \( t \), and \( g(x) \) is a vector of length \( m \).
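As a concrete toy instance of this standard form (the objective and constraints below are invented for illustration, not from the presentation), SciPy's `trust-constr` solver handles exactly this problem shape:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Invented toy instance: n = 2, one equality (t = 1), one inequality (m = 1)
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2                   # objective
h = NonlinearConstraint(lambda x: x[0] + x[1] - 2.0, 0.0, 0.0)    # h(x) = 0
g = NonlinearConstraint(lambda x: x[0] - x[1], -np.inf, 0.0)      # g(x) <= 0

# 'trust-constr' is SciPy's trust-region interior-point solver
res = minimize(f, x0=np.zeros(2), method="trust-constr", constraints=[h, g])
print(res.x)   # approximately [0.5, 1.5]
```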

How to solve this?

 First, look at a simpler problem: unconstrained optimization

\(\quad\quad\quad \min_x  f (x)\)

The unconstrained case has two main approaches: the line search method and the trust region method.

Line Search

With a line search, at each iteration we reduce the problem to the one-dimensional optimization problem:

\( \min_{\alpha > 0} f (x + \alpha p) \), where \(p\) is search direction, commonly \(p = - \nabla f \)

This is repeated until a stopping condition is met.

Nocedal & Wright 2006, Line Search, p 42
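A minimal sketch of this loop, using steepest descent with a backtracking (Armijo) line search; the test function and constants are illustrative assumptions, not from the presentation:

```python
import numpy as np

def line_search_minimize(f, grad, x0, c=1e-4, rho=0.5, tol=1e-8, max_iter=1000):
    """Steepest descent with a backtracking (Armijo) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # stopping condition: gradient ~ 0
            break
        p = -g                          # search direction p = -grad f
        alpha = 1.0
        # Backtrack until the sufficient-decrease (Armijo) condition holds
        while f(x + alpha * p) > f(x) + c * alpha * g.dot(p):
            alpha *= rho
        x = x + alpha * p
    return x

# Example (assumed): minimize f(x) = (x0 - 1)^2 + 2*(x1 + 2)^2
f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 2.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
x_star = line_search_minimize(f, grad, np.zeros(2))
print(x_star)   # approximately [1, -2]
```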

Trust Region

With the trust region method, we approximate \( f \) with a quadratic model \( m \), then:

\( \min_{p} m (x+p) \), where \( x+p \) is within the trust region \( \Delta \)

The model can be defined as: \( m (x + p) = f + p^T \nabla f + \frac{1}{2}p^T \nabla^2 f \, p \)

The model \( m \) can be crudely minimized by taking the step \( p = - \frac{\nabla f}{\parallel \nabla f \parallel} \Delta \), i.e. the steepest-descent direction scaled to the trust-region radius.

There are additional subtleties to this procedure, involving the trust region \( \Delta \), and the step definition \(p\).

Nocedal & Wright 2006, Trust Region, p 72
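The procedure can be sketched as below, pairing the crude step above with the standard accept/expand/shrink rules for \( \Delta \); the test function and the specific update constants are assumptions for illustration:

```python
import numpy as np

def trust_region_minimize(f, grad, hess, x0, delta=1.0, delta_max=10.0,
                          eta=0.15, tol=1e-6, max_iter=2000):
    """Basic trust-region loop using the crude step
    p = -grad f / ||grad f|| * Delta (no real subproblem solver)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        B = hess(x)
        p = -(g / gnorm) * delta                     # step to trust-region edge
        pred = -(g.dot(p) + 0.5 * p.dot(B).dot(p))   # decrease predicted by m
        rho = (f(x) - f(x + p)) / pred if pred > 0 else -1.0
        if rho < 0.25:
            delta *= 0.25                            # poor model: shrink region
        elif rho > 0.75:
            delta = min(2.0 * delta, delta_max)      # good model: expand region
        if rho > eta:
            x = x + p                                # accept the step
    return x

# Example (assumed): a simple quadratic test function
f = lambda x: (x[0] - 1.0)**2 + 2.0 * (x[1] + 2.0)**2
grad = lambda x: np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 2.0)])
hess = lambda x: np.diag([2.0, 4.0])
x_star = trust_region_minimize(f, grad, hess, np.zeros(2))
```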

Stopping Conditions

For the unconstrained problem, we examine the gradient of the objective function \( f (x) \); when it is (numerically) zero, we are at a stationary point, typically the desired local minimum.

This is a common stopping condition, although stopping conditions can be more complex.

Constrained Optimization

The next step is to add equality constraints: \( h (x) = 0\)

We then write a new function, which is the Lagrangian:

\( \mathcal{L} (x, \lambda) = f (x) + \lambda^T h(x) \)

where \( \lambda \) is a vector of Lagrange multipliers of length t.

The solution to this problem can be found with the following iterative process:

1) \( \min_x \mathcal{L} (x, \lambda) \)

2) update \( \lambda \) estimates
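This two-step iteration can be sketched as a first-order "dual ascent": the inner minimization uses an off-the-shelf unconstrained solver, and the multiplier update is a gradient step \( \lambda \leftarrow \lambda + \alpha h(x) \). The example problem and step size are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def dual_ascent(f, h, x0, lam0, alpha=0.5, tol=1e-5, max_iter=100):
    """(1) minimize the Lagrangian over x for fixed lambda,
       (2) take a gradient step on the multipliers: lam <- lam + alpha*h(x)."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(max_iter):
        L = lambda xx, lam=lam: f(xx) + lam.dot(h(xx))   # Lagrangian, lam fixed
        x = minimize(L, x, method="BFGS").x              # step 1: min_x L
        if np.linalg.norm(h(x)) < tol:                   # constraints satisfied
            break
        lam = lam + alpha * h(x)                         # step 2: update lambda
    return x, lam

# Example (assumed): min x1^2 + x2^2  s.t.  x1 + x2 - 1 = 0; solution (1/2, 1/2)
f = lambda x: x.dot(x)
h = lambda x: np.array([x[0] + x[1] - 1.0])
x_star, lam_star = dual_ascent(f, h, np.zeros(2), np.zeros(1))
```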

For inequality constraints, we will consider the interior-point method:

We add slack variables, changing the inequality constraints to equality constraints:

\( g (x) \leq 0 \Rightarrow (g (x) + s) = 0, s \geq 0 \)

We then rewrite our Lagrangian in a way which incorporates this:

\( \mathcal{L} (x, s, \lambda_h, \lambda_g) = f (x) - \mu \sum_{i=1}^m \ln s_i + \lambda_h^T h(x) + \lambda_g^T (g(x) + s) \)

We have now introduced a barrier (penalty) parameter, \( \mu \), which is progressively decreased.

This allows the inequality boundaries to be approached at a controlled rate.
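One way to sketch this idea in code is to eliminate the slacks via \( s = -g(x) \) and repeatedly minimize the barrier function \( f(x) - \mu \sum \ln(-g_i(x)) \) while shrinking \( \mu \). The toy problem and the \( \mu \) schedule below are assumptions, and a derivative-free inner solver stands in for the Newton steps a real interior-point method would take:

```python
import numpy as np
from scipy.optimize import minimize

def barrier_method(f, g, x0, mu=1.0, shrink=0.2, mu_min=1e-6):
    """Interior-point sketch: with slacks eliminated via s = -g(x), the barrier
    term -mu*sum(ln s_i) becomes -mu*sum(ln(-g_i(x))); mu shrinks in an outer loop."""
    x = np.asarray(x0, float)
    while mu > mu_min:
        def phi(xx, mu=mu):
            gx = g(xx)
            if np.any(gx >= 0.0):
                return np.inf       # reject points outside the strict interior
            return f(xx) - mu * np.sum(np.log(-gx))
        x = minimize(phi, x, method="Nelder-Mead").x   # inner minimization
        mu *= shrink                # approach the boundary at a controlled rate
    return x

# Example (assumed): min (x - 2)^2  s.t.  g(x) = x - 1 <= 0; solution x = 1
f = lambda x: (x[0] - 2.0)**2
g = lambda x: np.array([x[0] - 1.0])
x_star = barrier_method(f, g, np.array([0.0]))
```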

Final Algorithm

while \( E(x, s) > \epsilon_{TOL} \)

    while \( E(x, s; \mu) > \epsilon_{\mu} \)

        compute normal step

        compute new \( \lambda \)

        compute Hessian

        compute tangential step

        if step passes merit function test

            take step

        else

            try second-order correction

            maybe take step

            reduce trust region radius

    reduce \(\mu, \epsilon_{\mu} \)


Normal Step

The normal step attempts to satisfy the linearized constraints in the problem.

It does so in a manner similar to the basic trust region method; here the quadratic model to minimize is based on the linearized constraints.

Tangential Step

The tangential step attempts to find optimality while moving tangentially to the gradients of the constraints.

This is accomplished by using the Projected Conjugate Gradient method. Here the objective function does not contain the constraints.

Instead, all steps are "projected" through a matrix which keeps them in the null space of the constraint Jacobian, i.e., tangent to the constraint gradients.

Test Cases

This is a previously covered optimal control example:

Minimize the total energy consumed by a double integrator, from t=0 to t=1, meeting constraints.

State Space Equations:

$$ \dot{x} = v $$

$$ \dot{v} = u $$

$$ \dot{w} = \frac{1}{2}u^2 $$

Objective Function:

$$ \min_u w(1) $$

Equality Constraints:

$$ x(0) = 0, \quad x(1) = 0 $$

$$ v(0) = 1, \quad v(1) = -1 $$

$$ w(0) = 0 $$

Inequality Constraints:

$$ l - x(t) \geq 0$$


Using the Direct Collocation method with 40 time points, we get:

160 variables to optimize

122 equality constraints

40 inequality constraints
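A sketch of how this NLP can be assembled with trapezoidal direct collocation, here solved with SciPy's SLSQP (an SQP method, standing in for the interior-point solver). The bound \( l = 0.2 \) is an assumed value. The counts match the figures above: \( 3 \times 39 + 5 = 122 \) equalities, 40 inequalities, \( 4 \times 40 = 160 \) variables:

```python
import numpy as np
from scipy.optimize import minimize

N, l = 40, 0.2                 # 40 time points; l = 0.2 is an assumed bound
t = np.linspace(0.0, 1.0, N)
dt = t[1] - t[0]

def unpack(z):
    # z stacks the states x, v, w and the control u at the N nodes
    return z[:N], z[N:2*N], z[2*N:3*N], z[3*N:]

def objective(z):
    return unpack(z)[2][-1]    # minimize w(1), the accumulated energy

def eq_con(z):
    x, v, w, u = unpack(z)
    # Trapezoidal collocation of xdot = v, vdot = u, wdot = u^2 / 2
    cx = x[1:] - x[:-1] - 0.5 * dt * (v[1:] + v[:-1])
    cv = v[1:] - v[:-1] - 0.5 * dt * (u[1:] + u[:-1])
    cw = w[1:] - w[:-1] - 0.25 * dt * (u[1:]**2 + u[:-1]**2)
    bc = np.array([x[0], x[-1], v[0] - 1.0, v[-1] + 1.0, w[0]])
    return np.concatenate([cx, cv, cw, bc])   # 3*(N-1) + 5 = 122 equalities

def ineq_con(z):
    return l - unpack(z)[0]                   # l - x(t) >= 0: 40 inequalities

res = minimize(objective, np.zeros(4 * N), method="SLSQP",
               constraints=[{"type": "eq", "fun": eq_con},
                            {"type": "ineq", "fun": ineq_con}],
               options={"maxiter": 500})
x_opt, v_opt, w_opt, u_opt = unpack(res.x)
```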

When solving, we hit an iteration limit of 250 after about 20 seconds.

The error function at this point is less than 1e-4.

The error function is essentially the maximum violation of the KKT conditions.

2500 iterations with 160 variables takes around 2.5 minutes and reduces the error function to ~1e-5.

1000 iterations with 320 variables takes around 3.5 minutes.


[Figures: optimization test case results — position, velocity, energy output, and thrust]

Previously computed results using MATLAB's solvers:

[Figure: optimization test case, old MATLAB example]


1. Byrd, R. H., Hribar, M. E., & Nocedal, J. (1999). An interior point algorithm for large-scale nonlinear programming. SIAM Journal on Optimization, 9(4), 877-900.

2. Nocedal, J., & Wright, S. (2006). Numerical Optimization (2nd ed.). Springer.

3. Von Stryk, O. (1991). Numerical solution of optimal control problems by direct collocation. International Series of Numerical Mathematics (Vol. 111, pp. 1-13).
