Computing we get,
```math
u(t) = p(t) = (0, \frac{1}{t_f}), \quad \lambda = \frac{1}{t_f}, \quad x(t) = (t, \frac{t}{t_f}).
```
## III) Indirect simple shooting
### a) The shooting equation
>>>
**_Boundary value problem_**
Under our assumptions and thanks to the PMP we have to solve the following boundary value problem with a parameter $\lambda$:
```math
\left\{
\begin{array}{l}
\dot{z}(t) = \vec{H}(z(t),u[z(t)]), \\[0.5em]
x(0) = x_0, \quad c(x(t_f)) = 0, \quad p(t_f) = J_c^T(x(t_f)) \lambda,
\end{array}
\right.
```
with $`z=(x,p)`$, where $`u[z]`$ is the smooth control law in feedback form given by the maximization condition,
and where $`\vec{H}(z, u) := (\nabla_p H(z,u), -\nabla_x H(z,u))`$.
>>>
**_Remark._** We can replace $`\dot{z}(t) = \vec{H}(z(t),u[z(t)])`$ by $`\dot{z}(t) = \vec{h}(z(t))`$,
where $`h(z) = H(z, u[z])`$ is the maximized Hamiltonian.
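For illustration, here is a minimal Python sketch of the maximized Hamiltonian system $`\vec{h}`$, assuming the simple problem $`\dot{x}(t) = u(t)`$ with cost $`\frac{1}{2}\int_0^{t_f} |u(t)|^2 \, dt`$ (a hypothetical example, not necessarily the one treated above); the maximization condition then gives the feedback $`u[z] = p`$ and $`h(z) = \frac{1}{2}|p|^2`$. The names `n`, `control` and `hvec` are illustrative choices, not notation from the lecture.
```python
import numpy as np

# Minimal sketch. Assumed problem: dx/dt = u, cost (1/2)∫|u|^2 dt,
# so that the maximization condition gives u[z] = p and h(z) = |p|^2 / 2.

n = 2  # state dimension of the assumed example

def control(z):
    """Feedback control law u[z] from the maximization condition: here u[z] = p."""
    return z[n:]

def hvec(z):
    """Maximized Hamiltonian system: hvec(z) = (∇_p h(z), -∇_x h(z)) = (p, 0)."""
    p = z[n:]
    return np.concatenate((p, np.zeros(n)))
```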
>>>
**_Shooting function_**
To solve the BVP, we define a set of nonlinear equations, the so-called *shooting equations*. To do so, we introduce the *shooting function* $`S \colon \mathrm{R}^n \times \mathrm{R}^k \to \mathrm{R}^k \times \mathrm{R}^n`$:
```math
S(p_0, \lambda) :=
\begin{pmatrix}
c\left( \pi_x( z(t_f, x_0, p_0) ) \right) \\
\pi_p( z(t_f, x_0, p_0) ) - J_c^T \left( \pi_x( z(t_f, x_0, p_0) ) \right) \lambda
\end{pmatrix}
```
where $`\pi_x(x,p) := x`$ is the canonical projection onto the state space, $`\pi_p(x,p) := p`$ is the canonical projection onto the co-state space, and where $`z(t_f, x_0, p_0)`$ is the solution at time $`t_f`$ of
$`\dot{z}(t) = \vec{H}(z(t), u[z(t)]) = \vec{h}(z(t))`$, $`z(0) = (x_0, p_0)`$.
>>>
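Here is a minimal Python sketch of the shooting function, reusing `hvec` from the sketch above, with `scipy.integrate.solve_ivp` as integrator and assuming a hypothetical terminal constraint $`c(x) = x - x_\mathrm{target}`$ (so that $`J_c`$ is the identity); the values of `t_f`, `x0` and `x_target` are assumptions for the example, not data from the lecture.
```python
import numpy as np
from scipy.integrate import solve_ivp

t_f = 1.0                        # fixed final time (assumption)
x0 = np.zeros(n)                 # initial state (assumption)
x_target = np.array([0.0, 1.0])  # hypothetical target defining c(x) = x - x_target

def c(x):
    """Terminal constraint (hypothetical): c(x) = x - x_target, so J_c = I."""
    return x - x_target

def flow(tf, x0, p0):
    """Exponential map: integrate dz/dt = hvec(z), z(0) = (x0, p0), up to time tf."""
    z0 = np.concatenate((x0, p0))
    sol = solve_ivp(lambda t, z: hvec(z), (0.0, tf), z0, rtol=1e-10, atol=1e-10)
    return sol.y[:, -1]

def shoot(unknowns):
    """Shooting function S(p0, lam): terminal constraint and transversality condition."""
    p0, lam = unknowns[:n], unknowns[n:]
    zf = flow(t_f, x0, p0)
    xf, pf = zf[:n], zf[n:]
    Jc = np.eye(n)               # Jacobian of c for this hypothetical constraint
    return np.concatenate((c(xf), pf - Jc.T @ lam))
```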
>>>
**_Indirect simple shooting method_**
Solving the BVP is equivalent to finding a zero of the shooting function, that is, to solving
```math
S(p_0, \lambda) = 0.
```
The *indirect simple shooting method* consists in solving this equation.
>>>
In order to solve the shooting equations, we need to compute the control law $u[\cdot]$ and the Hamiltonian system $`\vec{H}`$ (or $`\vec{h}`$); we need an [integrator method](https://en.wikipedia.org/wiki/Numerical_methods_for_ordinary_differential_equations) to compute the *exponential map* $\exp(t \vec{H})$ defined by
```math
\exp(t \vec{H})(x_0, p_0) := z(t, x_0, p_0),
```
and we need a [Newton-like](https://en.wikipedia.org/wiki/Newton%27s_method) solver to solve $`S=0`$.
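Continuing the sketch above (and reusing `shoot` and `n` defined there), the shooting equation $`S=0`$ can be solved with a Newton-like solver, for instance `scipy.optimize.root`, which uses a Powell hybrid method by default:
```python
from scipy.optimize import root

# Newton-like solve of S(p0, lam) = 0, starting from an arbitrary initial guess.
guess = np.zeros(2 * n)              # initial guess for (p0, lam)
sol = root(shoot, guess, tol=1e-10)

p0_star, lam_star = sol.x[:n], sol.x[n:]
print("converged:", sol.success)
print("p0* =", p0_star, " lambda* =", lam_star)
```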
**_Remark:_** The notation with the exponential mapping is introduced because it is more explicit: it shows that, in order to compute an extremal solution of the PMP, we need to define the Hamiltonian system and to compute its exponential.
**_Remark:_**
It is important to understand that if $`(p_0^*, \lambda^*)`$ is a solution of $`S=0`$, then the control $`u(\cdot) := u[z(\cdot, x_0, p_0^*)]`$ is a candidate solution of the optimal control problem. It is only a candidate and not necessarily a solution of the OCP, since the PMP gives only necessary conditions of optimality. We would have to go further to check whether the control is locally or globally optimal.
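As a usage note on the sketches above, once a zero $`(p_0^*, \lambda^*)`$ is found, the candidate control can be reconstructed along the extremal by evaluating the feedback law on the flow:
```python
# Reconstruct the candidate control u(t) = u[z(t, x0, p0*)] at a few time instants,
# reusing flow, control, x0, t_f and p0_star from the sketches above.
for t in np.linspace(0.0, t_f, 5):
    z_t = np.concatenate((x0, p0_star)) if t == 0.0 else flow(t, x0, p0_star)
    print(f"t = {t:.2f}, u(t) = {control(z_t)}")
```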