3 Linear Time-Optimal Control
3.1 Existence of Time-Optimal Control
Consider the linear system of ODE:
$$\begin{cases} \dot{x}(t) = Mx(t) + N\alpha(t) & (t > 0)\\ x(0) = x^0, \end{cases} \tag{ODE}$$
for given matrices $M \in \mathbb{M}^{n \times n}$ and $N \in \mathbb{M}^{n \times m}$. We again take the control set to be the cube $A = [-1, 1]^m \subset \mathbb{R}^m$.
Define next the payoff
$$P[\alpha(\cdot)] := -\int_0^{\tau} 1\,ds = -\tau, \tag{P}$$
where $\tau = \tau(\alpha(\cdot))$ denotes the first time the solution of (ODE) hits the origin. (If the trajectory never hits $0$, we set $\tau = +\infty$.)
Optimal Control Problem
We are given the starting point $x^0 \in \mathbb{R}^n$, and want to find an optimal control $\alpha^*(\cdot)$ such that
$$P[\alpha^*(\cdot)] = \max_{\alpha(\cdot) \in \mathcal{A}} P[\alpha(\cdot)].$$
Then $\tau^* = -P[\alpha^*(\cdot)]$ is the minimum time to steer to the origin.
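As a minimal numerical sketch of this payoff (an illustrative aside, not part of the development: the function name, tolerance, and forward-Euler step below are assumptions), one can integrate (ODE) for a given control and report the first time the trajectory enters a small ball around the origin, as a proxy for the hitting time $\tau$:

```python
# Sketch: integrate (ODE) by forward Euler and estimate the hitting time tau
# for a given control alpha; the payoff is then P = -tau.
import numpy as np

def hitting_time(M, N, x0, alpha, T=20.0, dt=1e-3, tol=1e-2):
    """First t with |x(t)| < tol, or None if the origin is never reached."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(int(T / dt)):
        if np.linalg.norm(x) < tol:
            return k * dt               # approximate tau
        x = x + dt * (M @ x + N @ alpha(k * dt))
    return None                          # never hit 0 within [0, T]: tau = +infinity
```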
Theorem 3.1: Existence of Time-Optimal Control
Let $x^0 \in \mathbb{R}^n$, and suppose some control steers $x^0$ to the origin in finite time (that is, $x^0$ lies in the reachable set $\mathcal{C}$). Then there exists a time-optimal control $\alpha^*(\cdot)$; in fact, there exists an optimal bang-bang control.
Proof: Let $\tau^* := \inf\{t \mid x^0 \in \mathcal{C}(t)\}$. We want to show that $x^0 \in \mathcal{C}(\tau^*)$; that is, that some control $\alpha^*(\cdot)$ steers $x^0$ to the origin at time $\tau^*$.
Choose times $t_1 \ge t_2 \ge \cdots$ so that $x^0 \in \mathcal{C}(t_n)$ and $t_n \to \tau^*$. Since $x^0 \in \mathcal{C}(t_n)$, there exists a control $\alpha^n(\cdot) \in \mathcal{A}$ such that
$$x^0 = -\int_0^{t_n} X^{-1}(s) N \alpha^n(s)\,ds.$$
If necessary, redefine $\alpha^n(s)$ to equal $0$ for $s \ge t_n$. By Alaoglu's Theorem, there exist a subsequence $n_k \to \infty$ and a control $\alpha^*(\cdot) \in \mathcal{A}$ so that $\alpha^{n_k} \rightharpoonup \alpha^*$.
We assert that $\alpha^*(\cdot)$ is an optimal control; note first that $\alpha^*(s) = 0$ for $s \ge \tau^*$.
Since $\alpha^{n_k} = 0$ for $s \ge t_{n_k}$, we have
$$x^0 = -\int_0^{t_{n_k}} X^{-1}(s) N \alpha^{n_k}(s)\,ds = -\int_0^{t_1} X^{-1}(s) N \alpha^{n_k}(s)\,ds;$$
sending $n_k \to \infty$ gives
$$x^0 = -\int_0^{t_1} X^{-1}(s) N \alpha^*(s)\,ds = -\int_0^{\tau^*} X^{-1}(s) N \alpha^*(s)\,ds,$$
because $\alpha^*(s) = 0$ for $s \ge \tau^*$. Hence $x^0 \in \mathcal{C}(\tau^*)$, and therefore $\alpha^*(\cdot)$ is optimal.
According to Theorem 2.10 there in fact exists an optimal bang-bang control.
3.2 The Maximum Principle for Linear Time-Optimal Control
The really interesting practical issue now is understanding how to compute an optimal control $\alpha^*(\cdot)$.
Definition: Reachable set
Define
$$K(t, x^0) := \{x^1 \in \mathbb{R}^n \mid \text{there exists } \alpha(\cdot) \in \mathcal{A} \text{ which steers the system from } x^0 \text{ to } x^1 \text{ at time } t\}.$$
Since $x(\cdot)$ solves (ODE), we have $x^1 \in K(t, x^0)$ if and only if
$$x^1 = X(t)x^0 + X(t)\int_0^t X^{-1}(s) N \alpha(s)\,ds = x(t)$$
for some control $\alpha(\cdot) \in \mathcal{A}$.
Theorem 3.2: Geometry of the set $K$
The set $K = K(t, x^0)$ is convex and closed.
Proof:
- (Convexity) Let $x^1, x^2 \in K(t, x^0)$. Then there exist $\alpha^1, \alpha^2 \in \mathcal{A}$ such that
$$x^1 = X(t)x^0 + X(t)\int_0^t X^{-1}(s) N \alpha^1(s)\,ds,$$
$$x^2 = X(t)x^0 + X(t)\int_0^t X^{-1}(s) N \alpha^2(s)\,ds.$$
Let $0 \le \lambda \le 1$. Then
$$\lambda x^1 + (1-\lambda)x^2 = X(t)x^0 + X(t)\int_0^t X^{-1}(s) N \big(\lambda\alpha^1(s) + (1-\lambda)\alpha^2(s)\big)\,ds,$$
and hence $\lambda x^1 + (1-\lambda)x^2 \in K(t, x^0)$, since $\lambda\alpha^1 + (1-\lambda)\alpha^2 \in \mathcal{A}$.
- (Closedness) Assume $x^k \in K(t, x^0)$ for $k = 1, 2, \dots$ and $x^k \to x$. We must show $x \in K(t, x^0)$. As $x^k \in K(t, x^0)$, there exists $\alpha^k(\cdot) \in \mathcal{A}$ such that
$$x^k = X(t)x^0 + X(t)\int_0^t X^{-1}(s) N \alpha^k(s)\,ds.$$
According to Alaoglu's Theorem, there exist a subsequence $k_j \to \infty$ and a control $\alpha(\cdot) \in \mathcal{A}$ such that $\alpha^{k_j} \rightharpoonup \alpha$. Passing to the limit $k = k_j \to \infty$ in the identity above, we find
$$x = X(t)x^0 + X(t)\int_0^t X^{-1}(s) N \alpha(s)\,ds.$$
Thus $x \in K(t, x^0)$, and hence $K(t, x^0)$ is closed.
Notation: boundary
If $S$ is a subset of $\mathbb{R}^n$, we write $\partial S$ for the boundary of $S$.
Recall that $\tau^*$ denotes the minimum time it takes to steer to the origin, using the optimal control $\alpha^*$. Note that then $0 \in \partial K(\tau^*, x^0)$: roughly speaking, if $0$ were an interior point of $K(\tau^*, x^0)$, then by continuity of $t \mapsto K(t, x^0)$ we could reach the origin at some earlier time, contradicting the minimality of $\tau^*$.
Theorem 3.3: Pontryagin Maximum Principle for Linear Time-Optimal Control
There exists a nonzero vector $h$ such that
$$h^T X^{-1}(t) N \alpha^*(t) = \max_{a \in A}\{h^T X^{-1}(t) N a\} \tag{M}$$
for each time $0 \le t \le \tau^*$.
Interpretation: The significance of this assertion is that if we know $h$, then the maximization principle (M) provides us with a formula for computing an optimal control $\alpha^*(\cdot)$, or at least for extracting useful information about it.
We will see in the next chapter that assertion (M) is a special case of the general Pontryagin Maximum Principle.
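To make this concrete, here is a minimal Python sketch of how (M) determines $\alpha^*$ once $h$ is known (an illustrative aside: the function names and the sample vector $h$ are assumptions, and we use $X^{-1}(t) = e^{-tM}$). For the cube $A = [-1,1]^m$, the map $a \mapsto h^T X^{-1}(t) N a$ is linear, so its maximum is attained componentwise at signs:

```python
# Sketch: with h known, (M) is maximized over A = [-1,1]^m componentwise
# by taking signs of the row vector h^T X^{-1}(t) N.
import numpy as np
from scipy.linalg import expm

def alpha_star(h, M, N, t):
    w = N.T @ expm(-t * M).T @ h   # w_i = (h^T X^{-1}(t) N)_i
    return np.sign(w)

# Rocket railroad car data (Example 1 in section 3.3 below) with h = (1, 2):
# here h^T X^{-1}(t) N = h2 - t*h1 = 2 - t, so the control switches at t = 2.
M = np.array([[0.0, 1.0], [0.0, 0.0]])
N = np.array([[0.0], [1.0]])
h = np.array([1.0, 2.0])
print([float(alpha_star(h, M, N, t)[0]) for t in (0.0, 1.0, 3.0)])  # [1.0, 1.0, -1.0]
```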
Proof:
1. We know $0 \in \partial K(\tau^*, x^0)$. Since $K(\tau^*, x^0)$ is convex, there exists a supporting plane to $K(\tau^*, x^0)$ at $0$; this means that for some $g \ne 0$, we have
$$g \cdot x^1 \le 0 \quad \text{for all points } x^1 \in K(\tau^*, x^0).$$
2. Now $x^1 \in K(\tau^*, x^0)$ if and only if there exists $\alpha(\cdot) \in \mathcal{A}$ such that
$$x^1 = X(\tau^*) x^0 + X(\tau^*) \int_0^{\tau^*} X^{-1}(s) N \alpha(s)\,ds.$$
Also
$$0 = X(\tau^*) x^0 + X(\tau^*) \int_0^{\tau^*} X^{-1}(s) N \alpha^*(s)\,ds.$$
Since $g \cdot x^1 \le 0$, we deduce that
$$g \cdot \left( X(\tau^*) x^0 + X(\tau^*) \int_0^{\tau^*} X^{-1}(s) N \alpha(s)\,ds \right) \le 0 = g \cdot \left( X(\tau^*) x^0 + X(\tau^*) \int_0^{\tau^*} X^{-1}(s) N \alpha^*(s)\,ds \right).$$
Define $h^T := g^T X(\tau^*)$, and therefore
$$\int_0^{\tau^*} h^T X^{-1}(s) N \alpha(s)\,ds \le \int_0^{\tau^*} h^T X^{-1}(s) N \alpha^*(s)\,ds$$
for all controls $\alpha(\cdot) \in \mathcal{A}$.
3. We claim now that the foregoing implies
$$h^T X^{-1}(s) N \alpha^*(s) = \max_{a \in A}\{h^T X^{-1}(s) N a\}$$
for almost every time $s \in [0, \tau^*]$. For suppose not; then there would exist a subset $E \subset [0, \tau^*]$ of positive measure such that
$$h^T X^{-1}(s) N \alpha^*(s) < \max_{a \in A}\{h^T X^{-1}(s) N a\}$$
for $s \in E$. Design a new control
$$\hat{\alpha}(s) := \begin{cases} \alpha^*(s) & (s \notin E)\\ \alpha(s) & (s \in E), \end{cases}$$
where $\alpha(s)$ is selected so that
$$h^T X^{-1}(s) N \alpha(s) = \max_{a \in A}\{h^T X^{-1}(s) N a\}.$$
Then
$$\int_0^{\tau^*} h^T X^{-1}(s) N \hat{\alpha}(s)\,ds > \int_0^{\tau^*} h^T X^{-1}(s) N \alpha^*(s)\,ds.$$
This contradicts Step 2 above.
For later reference, we pause here to rewrite the foregoing into different notation; this will turn out to be a special case of the general theory developed later in Chapter 4. First of all, define the Hamiltonian:
Definition: Hamiltonian
$$H(x, p, a) := (Mx + Na) \cdot p \qquad (x, p \in \mathbb{R}^n,\ a \in A).$$
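For instance, for the rocket railroad car of Example 1 below, where $M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $N = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, the Hamiltonian is simply
$$H(x, p, a) = (Mx + Na) \cdot p = x_2 p_1 + a\, p_2.$$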
Theorem 3.4: Another way to write Pontryagin Maximum Principle for Linear Time-Optimal Control
Let $\alpha^*(\cdot)$ be a time-optimal control and $x^*(\cdot)$ the corresponding response of the system.
Then there exists a function $p^*(\cdot) : [0, \tau^*] \to \mathbb{R}^n$ such that
$$\dot{x}^*(t) = \nabla_p H(x^*(t), p^*(t), \alpha^*(t)), \tag{ODE}$$
$$\dot{p}^*(t) = -\nabla_x H(x^*(t), p^*(t), \alpha^*(t)), \tag{ADJ}$$
and
$$H(x^*(t), p^*(t), \alpha^*(t)) = \max_{a \in A} H(x^*(t), p^*(t), a). \tag{M}$$
We call (ADJ) the adjoint equations and (M) the maximization principle. The function $p^*(\cdot)$ is called the costate.
Proof:
1. Select the vector $h$ as in Theorem 3.3, and consider the system
$$\begin{cases} \dot{p}^*(t) = -M^T p^*(t)\\ p^*(0) = h. \end{cases}$$
The solution is
$$p^*(t) = e^{-tM^T} h.$$
Since $X^{-1}(t) = e^{-tM}$, transposing gives $p^*(t)^T = h^T X^{-1}(t)$.
2. We know from condition (M) in Theorem 3.3 that
$$h^T X^{-1}(t) N \alpha^*(t) = \max_{a \in A}\{h^T X^{-1}(t) N a\}.$$
Since $p^*(t)^T = h^T X^{-1}(t)$, this means that
$$p^*(t) \cdot N \alpha^*(t) = \max_{a \in A}\{p^*(t) \cdot N a\};$$
and since the term $M x^*(t) \cdot p^*(t)$ does not depend on $a$, adding it to both sides yields the maximization principle (M).
3. Finally, we observe that, according to the definition of the Hamiltonian $H$, the dynamical equations for $x^*(\cdot)$ and $p^*(\cdot)$ take the form (ODE) and (ADJ), as stated in the Theorem.
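The theorem translates directly into a numerical scheme: since the adjoint dynamics (ADJ) do not involve $x^*$, one can integrate them forward from $p^*(0) = h$ and feed the maximizing control from (M) into (ODE). A rough forward-Euler sketch in Python, assuming $A = [-1,1]^m$ (the function name and step size are illustrative choices):

```python
# Sketch: integrate (ODE) and (ADJ) together, choosing alpha by (M).
# For H(x,p,a) = (Mx+Na).p on A = [-1,1]^m, (M) gives a = sign(N^T p).
import numpy as np

def simulate(M, N, x0, h, T, dt=1e-3):
    x, p = np.asarray(x0, float).copy(), np.asarray(h, float).copy()
    for _ in range(int(T / dt)):
        a = np.sign(N.T @ p)          # maximization principle (M)
        x = x + dt * (M @ x + N @ a)  # (ODE): x' = grad_p H = Mx + Na
        p = p + dt * (-M.T @ p)       # (ADJ): p' = -grad_x H = -M^T p
    return x, p
```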
3.3 Examples
Example 1: Rocket Railroad Car
We recall this example, introduced in §1.2. We have
$$\dot{x}(t) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} x(t) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \alpha(t) \tag{ODE}$$
for
$$x(t) = \begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix}, \qquad A = [-1, 1];$$
so here $M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ and $N = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$.
According to the Pontryagin Maximum Principle, there exists $h \ne 0$ such that
$$h^T X^{-1}(t) N \alpha^*(t) = \max_{|a| \le 1}\{h^T X^{-1}(t) N a\}. \tag{M}$$
We will extract from (M) the interesting fact that an optimal control $\alpha^*$ switches values at most once.
We must compute $X^{-1}(t) = e^{-tM}$. To do so, we observe
$$M^0 = I, \qquad M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \qquad M^2 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix},$$
and therefore
$$e^{-tM} = I - tM = \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix}.$$
Then
$$h^T X^{-1}(t) N = (h_1, h_2) \begin{pmatrix} 1 & -t \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} = (h_1, h_2) \begin{pmatrix} -t \\ 1 \end{pmatrix} = -t h_1 + h_2.$$
The Maximum Principle asserts
$$(-t h_1 + h_2)\,\alpha^*(t) = \max_{|a| \le 1}\{(-t h_1 + h_2)\, a\},$$
and this implies that
$$\alpha^*(t) = \operatorname{sgn}(-t h_1 + h_2)$$
for the sign function
$$\operatorname{sgn}(x) = \begin{cases} 1 & (x > 0)\\ 0 & (x = 0)\\ -1 & (x < 0). \end{cases}$$
Therefore the optimal control $\alpha^*$ switches at most once, since the affine function $t \mapsto -t h_1 + h_2$ changes sign at most once; and if $h_1 = 0$, then $\alpha^*$ is constant.
Since the optimal control switches at most once, the control we constructed by a geometric method in §1.3 must have been optimal.
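This can also be checked numerically. The following rough Python sketch (an illustrative aside, not from these notes) integrates the standard switching-curve feedback for the double integrator, $\alpha = -\operatorname{sgn}\!\big(x_1 + \tfrac{1}{2} x_2 |x_2|\big)$, starting from $(1, 0)$, for which the analytic minimum time is $\tau^* = 2$; the step size and tolerance are arbitrary choices:

```python
# Sketch: bang-bang feedback for the rocket railroad car via the switching
# curve x1 = -x2|x2|/2; from (1, 0) the analytic minimum time is tau* = 2.
import numpy as np

def alpha_feedback(x1, x2):
    s = x1 + 0.5 * x2 * abs(x2)            # switching function
    return -np.sign(s) if s != 0 else -np.sign(x2)

x, dt, t = np.array([1.0, 0.0]), 1e-4, 0.0
while np.hypot(x[0], x[1]) > 1e-3 and t < 5.0:
    a = alpha_feedback(x[0], x[1])
    x = x + dt * np.array([x[1], a])       # x1' = x2, x2' = a
    t += dt
print(f"|x| = {np.hypot(x[0], x[1]):.1e} at t = {t:.3f}")   # t close to 2
```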
Example 2: Control of a Vibrating Spring
Consider next the simple dynamics
$$\ddot{x} + x = \alpha,$$
where we interpret the control $\alpha$ as an exterior force acting on an oscillating weight (of unit mass) hanging from a spring. Our goal is to design an optimal exterior forcing $\alpha^*(\cdot)$ that brings the motion to a stop in minimum time.
We have $n = 2$, $m = 1$. The individual dynamical equations read
$$\dot{x}_1(t) = x_2(t), \qquad \dot{x}_2(t) = -x_1(t) + \alpha(t),$$
which in vector notation become
$$\dot{x}(t) = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} x(t) + \begin{pmatrix} 0 \\ 1 \end{pmatrix} \alpha(t) \tag{ODE}$$
for $|\alpha(t)| \le 1$; that is, $M = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$, $N = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$, and $A = [-1, 1]$.
Using the maximum principle
We employ the Pontryagin Maximum Principle, which asserts that there exists $h \ne 0$ such that
$$h^T X^{-1}(t) N \alpha^*(t) = \max_{a \in A}\{h^T X^{-1}(t) N a\}. \tag{M}$$
To extract useful information from (M) we must compute $X^{-1}(\cdot)$. To do so, we observe
$$M^0 = I, \quad M = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad M^2 = -I, \quad M^3 = -M, \quad M^4 = I, \dots$$
Therefore
$$\begin{aligned} e^{tM} &= I + tM + \frac{t^2}{2!} M^2 + \frac{t^3}{3!} M^3 + \cdots\\ &= \left(1 - \frac{t^2}{2!} + \frac{t^4}{4!} - \cdots\right) I + \left(t - \frac{t^3}{3!} + \frac{t^5}{5!} - \cdots\right) M\\ &= (\cos t)\, I + (\sin t)\, M = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix}, \end{aligned}$$
and consequently
$$X^{-1}(t) = e^{-tM} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix}.$$
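One can double-check this closed form numerically (an illustrative aside):

```python
# Verify e^{tM} = (cos t) I + (sin t) M for the spring matrix M.
import numpy as np
from scipy.linalg import expm

t = 0.7
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
R = np.array([[np.cos(t), np.sin(t)], [-np.sin(t), np.cos(t)]])
print(np.allclose(expm(t * M), R))   # True
```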
So we have
$$h^T X^{-1}(t) = (h_1 \cos t + h_2 \sin t,\ -h_1 \sin t + h_2 \cos t)$$
and
$$h^T X^{-1}(t) N = -h_1 \sin t + h_2 \cos t,$$
whence
$$h^T X^{-1}(t) N = \sqrt{h_1^2 + h_2^2}\, \sin(t + \delta)$$
for some phase $\delta$.
According to condition (M), for each time $t$ we have
$$(-h_1 \sin t + h_2 \cos t)\, \alpha^*(t) = \max_{|a| \le 1}\{(-h_1 \sin t + h_2 \cos t)\, a\}.$$
Therefore
$$\alpha^*(t) = \operatorname{sgn}(-h_1 \sin t + h_2 \cos t).$$
Finding the optimal control
To simplify further, we may assume $h_1^2 + h_2^2 = 1$. Recalling the identity $\sin(t + \delta) = \cos\delta \sin t + \sin\delta \cos t$, we choose $\delta$ so that $-h_1 = \cos\delta$ and $h_2 = \sin\delta$. Then
$$\alpha^*(t) = \operatorname{sgn}(\cos\delta \sin t + \sin\delta \cos t) = \operatorname{sgn}(\sin(t + \delta)).$$
We deduce therefore that $\alpha^*$ switches from $+1$ to $-1$, and vice versa, every $\pi$ units of time.
Geometric interpretation
Next, we figure out the geometric consequences.
When $\alpha \equiv 1$, our (ODE) becomes
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1 + 1.$$
In this case, we can calculate that
$$\frac{d}{dt}\left[(x_1(t) - 1)^2 + (x_2(t))^2\right] = 2(x_1 - 1)\dot{x}_1 + 2 x_2 \dot{x}_2 = 2(x_1 - 1)x_2 + 2 x_2(-x_1 + 1) = 0.$$
Consequently, the motion satisfies $(x_1(t) - 1)^2 + (x_2(t))^2 = r_1^2$, for some radius $r_1$, and therefore the trajectory lies on a circle with center $(1, 0)$, as illustrated.
If $\alpha \equiv -1$, our (ODE) instead becomes
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = -x_1 - 1,$$
in which case
$$\frac{d}{dt}\left[(x_1(t) + 1)^2 + (x_2(t))^2\right] = 0.$$
Thus $(x_1(t) + 1)^2 + (x_2(t))^2 = r_2^2$, for some radius $r_2$, and the motion lies on a circle with center $(-1, 0)$, as illustrated.
In summary, to get to the origin we must switch our control $\alpha(\cdot)$ back and forth between the values $\pm 1$, causing the trajectory to jump between circles centered at $(1, 0)$ and $(-1, 0)$, with the switches occurring every $\pi$ units of time.
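A rough numerical illustration of this picture (the phase $\delta$, initial state, and step size below are arbitrary choices): between switches of $\alpha^*(t) = \operatorname{sgn}(\sin(t + \delta))$, the radius about the current center $(\alpha^*, 0)$ stays essentially constant.

```python
# Sketch: spring dynamics x1' = x2, x2' = -x1 + a under the bang-bang law
# a(t) = sgn(sin(t + delta)); sample the radius about the center (a, 0).
import numpy as np

delta, dt = 0.3, 1e-4
x = np.array([2.0, 0.0])                 # (position, velocity)
for k in range(int(2 * np.pi / dt)):
    t = k * dt
    a = np.sign(np.sin(t + delta))       # switches every pi units of time
    if k % int(1.0 / dt) == 0:           # print once per unit of time
        r = np.hypot(x[0] - a, x[1])
        print(f"t = {t:3.1f}   a = {a:+.0f}   radius about ({a:+.0f},0) = {r:.4f}")
    x = x + dt * np.array([x[1], -x[0] + a])
```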