The static solution is thus $(x^*, u^*) = (0, 0)$. This solution may be seen as the static pair $(x, u)$ that minimizes the cost $J(u)$ under
the constraint $u \in [-1, 1]$.
It is well known that this problem is what we call a *turnpike* optimal control problem.
Hence, if the final time $t_f$ is long enough, the solution has the following form:
starting from $x(0)=1$, reach the static solution as fast as possible, stay there as long as possible, then reach
the target $x(t_f)=1/2$. In this case, the optimal control would be
$$
u(t) = \left\{
\begin{array}{lll}
-1 & \text{if} & t \in [0, t_1], \\[0.5em]
\phantom{-}0 & \text{if} & t \in (t_1, t_2], \\[0.5em]
+1 & \text{if} & t \in (t_2, t_f],
\end{array}
\right.
$$
with $0 < t_1 < t_2 < t_f$. We say that the control is *Bang-Singular-Bang*. A bang arc corresponds to $u \in \{-1, 1\}$, while a singular control takes values in $(-1, 1)$. Since the optimal control law is discontinuous, solving this optimal control problem by indirect methods and finding the *switching times* $t_1$ and $t_2$ requires what we call a *multiple shooting method*. In the next section we introduce a regularization technique that forces the control to stay in the open set $(-1,1)$ and to be smooth. In this setting we can implement a simple shooting method and determine the structure of the optimal control law. The simple shooting method thus gives us the structure of the optimal control law together with an approximation of the switching times, which we use as an initial guess for the multiple shooting method presented in the last section.
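For illustration, here is a minimal sketch of such a Bang-Singular-Bang control as a plain Python function; the switching times `t1` and `t2` are placeholders, to be determined by the multiple shooting method below.
%% Cell type:code id: tags:
``` python
# Illustration only: a Bang-Singular-Bang control with placeholder switching times.
# The actual values of t1 and t2 are the unknowns computed by multiple shooting later on.
def bsb_control(t, t1, t2):
    if t <= t1:
        return -1.0   # first bang arc
    elif t <= t2:
        return 0.0    # singular arc: stay at the static (turnpike) solution
    else:
        return 1.0    # final bang arc
```
%% Cell type:markdown id: tags: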
<div class="alert alert-warning">
**Main goal**
Find the switching times $t_1$ and $t_2$ by multiple shooting.
</div>
Steps:
1. Regularize the problem and solve the regularized problem by simple shooting.
2. Determine the structure of the non-regularized optimal control problem, that is, the Bang-Singular-Bang structure, and find a good approximation of the switching times and of the initial covector.
3. Solve the non-regularized optimal control problem by multiple shooting.
**_Remark 1._** See this [page](../lecture/lecture_simple_shooting.ipynb) for a general presentation of the simple shooting method.
**_Remark 2._** In this particular example, the singular control does not depend on the costate $p$ since it is constant. This can happen when the state dimension is low. It could be exploited to simplify the definition of the multiple shooting method; however, to stay general, we will not use this particular property in this notebook.
Our goal is to determine the structure of the optimal control problem when $(\varepsilon, t_f) = (0, 2)$. The problem is simpler to solve when $\varepsilon$ is larger and $t_f$ is smaller, and it is smooth whenever $\varepsilon>0$. Hence, we start by solving the problem for $(\varepsilon, t_f) = (1, 1)$. In a second step, we decrease the *penalization term* $\varepsilon$, and in a final step, we increase the final time $t_f$ to its final value $2$.
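As a rough, self-contained sketch of this continuation strategy, one could proceed as follows with `scipy` (this is not the nutopy-based solver of this notebook: the dynamics $\dot{x} = u$, $\dot{p} = 2x$ are read off the pseudo-Hamiltonian given below, and the sequence of $(\varepsilon, t_f)$ values is only indicative).
%% Cell type:code id: tags:
``` python
# Hedged sketch: simple shooting on the regularized problem with a continuation
# on (eps, tf), warm-starting each solve with the previous solution.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

x0_, xf_target_ = 1.0, 0.5             # initial state and target (local names for this sketch)

def u_reg(p, e):
    # maximizing control of the regularized problem (same formula as ufun below)
    return (-e + np.sqrt(e**2 + p**2)) / p if p != 0.0 else 0.0

def shoot_simple(p0, e, tf):
    # simple shooting residual: x(tf) - xf_target
    def rhs(t, z):
        x, p = z
        return [u_reg(p, e), 2.0 * x]  # x' = dH/dp = u,  p' = -dH/dx = 2x
    sol = solve_ivp(rhs, (0.0, tf), [x0_, float(p0)], rtol=1e-10, atol=1e-10)
    return sol.y[0, -1] - xf_target_

p0 = -0.5                              # rough initial guess for the initial covector
for e, tf in [(1.0, 1.0), (0.5, 1.0), (0.1, 1.0), (0.1, 2.0), (1e-2, 2.0)]:
    p0 = fsolve(lambda y: [shoot_simple(y[0], e, tf)], [p0])[0]  # warm start the next step
```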
%% Cell type:markdown id: tags:
### Preliminaries
%% Cell type:code id: tags:
``` python
# import packages
import nutopy as nt
import nutopy.tools as tools
import nutopy.ocp as ocp
import numpy as np
import matplotlib.pyplot as plt

#plt.rcParams['figure.figsize'] = [10, 5]
plt.rcParams['figure.dpi'] = 150
```
%% Cell type:code id: tags:
``` python
# Initial and final values of the final time tf
tf_init  = 1.0  # with this value the problem is simpler to solve since the trajectory stays
                # less time around the static solution
tf_final = 2.0
```
%% Cell type:markdown id: tags:
### Maximized Hamiltonian and its derivatives
%% Cell type:markdown id: tags:
The pseudo-Hamiltonian is (in the normal case)
$$
H(x,p,u,\varepsilon) = pu - x^2 + \varepsilon \ln(1-u^2).
$$
Note that we include the parameter $\varepsilon$ among the arguments of the pseudo-Hamiltonian since we will vary it.
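For concreteness, the pseudo-Hamiltonian can be coded as follows (illustrative only; this is not the `hfun` passed to nutopy below).
%% Cell type:code id: tags:
``` python
# Illustrative pseudo-Hamiltonian H(x, p, u, e) = p*u - x**2 + e*log(1 - u**2), for |u| < 1.
def pseudo_hamiltonian(x, p, u, e):
    return p * u - x**2 + e * np.log(1.0 - u**2)
```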
%% Cell type:markdown id: tags:
<div class="alert alert-info">
**_Question 1:_**
Give the maximizing control $u[p, \varepsilon]$, that is, the control in feedback form that solves the maximization condition.
</div>
%% Cell type:markdown id: tags:
**Answer 1:** To complete here (double-click on the line to complete)
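*A possible derivation, consistent with the code of `ufun` below:* since $u \mapsto H(x,p,u,\varepsilon)$ is strictly concave on $(-1,1)$, the maximizer is given by the stationarity condition
$$
\frac{\partial H}{\partial u}(x,p,u,\varepsilon) = p - \frac{2\varepsilon u}{1-u^2} = 0
\quad \Longleftrightarrow \quad
p\, u^2 + 2\varepsilon\, u - p = 0,
$$
whose unique root in $(-1,1)$ is
$$
u[p, \varepsilon] = \frac{-\varepsilon + \sqrt{\varepsilon^2 + p^2}}{p}, \qquad u[0, \varepsilon] = 0.
$$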
%% Cell type:markdown id: tags:
<div class="alert alert-info">
**_Question 2:_**
Complete the code of the maximizing control $u[p, \varepsilon]$ and its derivative with respect to $p$, that is $\frac{\partial u}{\partial p}[p, \varepsilon]$.
</div>
%% Cell type:code id: tags:
``` python
# ----------------------------
# Answer 2 to complete here
# ----------------------------
#
# Control in feedback form u[p,e] and its partial derivative wrt. p.
#
@tools.vectorize(vvars=(1,))
def ufun(p, e):
    # u = 0 ### TO COMPLETE
    u = (-e + np.sqrt(e**2 + p**2)) / p
    return u

def dufun(p, e):
    # du = 0 ### TO COMPLETE
    s  = np.sqrt(e**2 + p**2)
    du = 1.0 / s + (e - s) / p**2
    return du
```
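%% Cell type:markdown id: tags:
Optional sanity check (not part of the original exercise): compare `dufun` with a central finite difference of `ufun` at a few sample points, assuming both functions accept scalar arguments as written.
%% Cell type:code id: tags:
``` python
# Rough check: dufun should match a central finite difference of ufun.
h = 1e-6
for p in (-2.0, -0.5, 0.7, 3.0):
    for e in (1.0, 0.1):
        fd = (ufun(p + h, e) - ufun(p - h, e)) / (2 * h)
        print(p, e, float(dufun(p, e)), float(fd))
```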
%% Cell type:markdown id: tags:
We give next the maximized Hamiltonian with its derivatives. This permits us to define the flow of the associated Hamiltonian vector field.
%% Cell type:code id: tags:
``` python
# Definition of the maximized Hamiltonian and its derivatives
# The second derivative d2hfun is computed in part by finite differences
```
%% Cell type:markdown id: tags:
The unknowns of the multiple shooting method are $y = (p_0, t_1, t_2)$ and the shooting function is
$$
S(p_0, t_1, t_2) :=
\begin{pmatrix}
x_1 \\
p_1 \\
x(t_f, t_2, x_2, p_2, u_+) - 1/2
\end{pmatrix},
$$
where $z_2 := (x_2, p_2) = z(t_2, t_1, x_1, p_1, u_0)$, $z_1 := (x_1, p_1) = z(t_1, t_0, x_0, p_0, u_-)$, and where $z(t, s, a, b, u)$ is the solution at time $t$ of the Hamiltonian system associated with the control $u$, starting at time $s$ from the initial condition $z(s) = (a,b)$.
We have introduced the notation $u_-$ for $u\equiv -1$, $u_0$ for $u\equiv 0$ and $u_+$ for $u\equiv +1$.
**_Remark:_** We know that $(x_2, p_2)=(0,0)$.
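On the bang arcs, where $u \equiv \pm 1$, the Hamiltonian system reduces to $\dot{x} = u$, $\dot{p} = 2x$, so the flow $z(t, s, a, b, u)$ is available in closed form. Here is a small illustrative sketch (this is not the nutopy flows `fminus`, `fsing`, `fplus` used below):
%% Cell type:code id: tags:
``` python
# Illustration: closed-form flow of the Hamiltonian system for a constant bang control u = c,
# namely x(t) = a + c (t - s) and p(t) = b + 2 a (t - s) + c (t - s)**2.
def bang_flow(s, a, b, t, c):
    dt = t - s
    return a + c * dt, b + 2.0 * a * dt + c * dt**2
```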
%% Cell type:markdown id: tags:
<div class="alert alert-info">
**_Question 5:_**
Complete the code of the multiple shooting function.
</div>
%% Cell type:code id: tags:
``` python
# ----------------------------
# Answer 5 to complete here
# ----------------------------
#
# Multiple shooting function
#
tf = tf_final  # we set the final time to the value tf_final

def shoot_multiple(y):
    p0 = y[0]
    t1 = y[1]
    t2 = y[2]
    # s = np.zeros([3]) ### TO COMPLETE: use fminus, fsing, fplus, t0, t1, t2, tf, x0, xf_target
    x1, p1 = fminus(t0, x0, p0, t1)              # on [t0, t1]
    x2, p2 = (np.array([0.0]), np.array([0.0]))  # fsing(t1, x1, p1, t2) on [t1, t2]: we use (x2, p2) = (0, 0)
    xf, pf = fplus(t2, x2, p2, tf)               # on [t2, tf]
    s    = np.zeros([3])
    s[0] = x1              # x(t1) = 0: entry into the singular (turnpike) arc
    s[1] = p1              # p(t1) = 0
    s[2] = xf - xf_target  # x(tf) - xf_target
    return s
```
%% Cell type:markdown id: tags:
### Resolution of the shooting function
%% Cell type:markdown id: tags:
<div class="alert alert-info">
**_Question 6:_**
Give initial guesses for the times $t_1$ and $t_2$ according to the solution of the regularized problem.
</div>
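As an illustrative, self-contained sketch of this resolution, one could solve the multiple shooting equations with `scipy.optimize.fsolve` and the closed-form bang flows (this stands in for the nutopy-based `fminus`, `fsing`, `fplus`; the initial guess below is a placeholder for the values read off the regularized solution).
%% Cell type:code id: tags:
``` python
# Hedged sketch of the multiple shooting resolution with scipy.
import numpy as np
from scipy.optimize import fsolve

t0_, tf_, x0_, xf_target_ = 0.0, 2.0, 1.0, 0.5   # data of the problem (local names for this sketch)

def bang_flow(s, a, b, t, c):
    # closed-form flow of a bang arc u = c: x' = c, p' = 2x
    dt = t - s
    return a + c * dt, b + 2.0 * a * dt + c * dt**2

def shoot_multiple_sketch(y):
    p0, t1, t2 = y
    x1, p1 = bang_flow(t0_, x0_, p0, t1, -1.0)   # u = -1 on [t0, t1]
    x2, p2 = 0.0, 0.0                            # singular arc stays at the static solution
    xf, pf = bang_flow(t2, x2, p2, tf_, +1.0)    # u = +1 on [t2, tf]
    return [x1, p1, xf - xf_target_]             # same residuals as shoot_multiple above

y0  = [-0.5, 0.7, 1.3]   # placeholder initial guess (p0, t1, t2), e.g. from the regularized run
sol = fsolve(shoot_multiple_sketch, y0)
print(sol)
```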