
Lecture: The indirect simple shooting method

  • Author: Olivier Cots
  • Date: March 2021


Abstract

We present in this notebook the indirect simple shooting method, based on the Pontryagin Maximum Principle (PMP), to solve a smooth optimal control problem. By smooth, we mean that the maximization condition of the PMP gives a control law, in feedback form with respect to the state and the costate, that is at least continuously differentiable.

We use the nutopy package to solve the optimal control problem by simple shooting. You can find another smooth example, with more details about the use of nutopy, at this page: note that in that example, the nutopy package is coupled with the bocop software, which implements a direct collocation method.

Goal

The goal of this presentation is that, at the end, you will be able to implement an indirect simple shooting method with the nutopy package on an academic optimal control problem whose optimal control (that is, the solution of the problem) is smooth. We assume you have some basic knowledge of optimal control theory. To achieve the goal, you can start by simply reading the text and the code at the end of the notebook. For a deeper understanding and more details, you can watch the videos embedded in the notebook. All these videos explain the material you can find here: pdf file.

Contents

  • I) Statement of the optimal control problem and necessary conditions of optimality
    • a) Definition of the optimal control problem - Video
    • b) Application of the Pontryagin Maximum Principle - Video
    • c) The hidden true Hamiltonian - Video
    • d) Illustration of the resolution of the necessary conditions of optimality - Video
  • II) Examples and boundary value problems
    • a) Simple 1D example - Video
    • b) Calculus of variations - Video
    • c) An energy min navigation problem
  • III) Indirect simple shooting
    • a) The shooting equation - Video
    • b) The iteration of the Newton solver and the Jacobian of the shooting function - Video
    • c) A word on the Lagrange multiplier - Video
  • IV) Numerical resolution of the shooting equations with the nutopy package
    • a) Simple 1D example
    • b) Calculus of variations
    • c) An energy min navigation problem - exercise

I) Statement of the optimal control problem and necessary conditions of optimality

a) Definition of the optimal control problem

We consider the following smooth (all the data are at least C^1) Optimal Control Problem (OCP) in Lagrange form, with fixed initial condition and final time:

    \left\{ 
    \begin{array}{l}
        \displaystyle J(u)  := \displaystyle \int_0^{t_f} L(x(t),u(t)) \, \mathrm{d}t \longrightarrow \min \\[1.0em]
        \dot{x}(t) =  f(x(t),u(t)), \quad  u(t) \in U, \quad t \in [0, t_f] \text{ a.e.}, \\[1.0em]
        x(0) = x_0 , \quad c(x(t_f)) = 0_{\mathrm{R}^k},
    \end{array}
    \right. 

with U \subset \mathrm{R}^m an arbitrary control set and c a smooth mapping such that its Jacobian c'(x) (also written J_c(x)) has full rank for every x satisfying the constraint c(x)=0. The control u is sought in the set of control laws L^\infty([0, t_f], \mathrm{R}^m).
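To make the abstract problem (OCP) concrete, here is a minimal sketch of indirect simple shooting on an academic 1D instance: minimize \int_0^{t_f} u(t)^2/2 \, dt subject to \dot{x} = -x + u, x(0) = x_0 and x(t_f) = x_f. This specific problem, and the use of SciPy instead of nutopy, are illustrative assumptions of this sketch, not the notebook's own code. For this instance the PMP maximization condition yields the smooth feedback u(x, p) = p, and the shooting unknown is the initial costate p_0.

```python
# Illustrative sketch (assumed problem data, SciPy in place of nutopy):
#   min  ∫_0^tf u(t)^2 / 2 dt,   ẋ = -x + u,   x(0) = x0,   x(tf) = xf.
# PMP pseudo-Hamiltonian: H(x, p, u) = p(-x + u) - u^2/2, maximized at u = p.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

x0, xf, tf = -1.0, 0.0, 1.0  # assumed boundary data for the sketch

def hamiltonian_system(t, z):
    # State-costate dynamics with the control in feedback form u(x, p) = p:
    #   dx/dt =  ∂H/∂p = -x + p,   dp/dt = -∂H/∂x = p.
    x, p = z
    return [-x + p, p]

def shoot(p0):
    # Shooting function S(p0) = x(tf; x0, p0) - xf: integrate the
    # Hamiltonian system from (x0, p0) and measure the final-state mismatch.
    p0 = float(np.ravel(p0)[0])
    sol = solve_ivp(hamiltonian_system, (0.0, tf), [x0, p0], rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - xf

# Solve S(p0) = 0 with a Newton-type method.
p0_sol = fsolve(shoot, 0.1)[0]
print(p0_sol)
```

For this linear-quadratic instance the shooting equation has the closed-form solution p_0 = (x_f - x_0 e^{-t_f}) / \sinh(t_f), which can be used to check the numerical result. Sections III and IV of the notebook revisit this construction with nutopy.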