Singular control

In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics or trajectory optimization in aeronautics. A more technical explanation follows.

The most common difficulty in applying Pontryagin's principle arises when the Hamiltonian depends linearly on the control $u$, i.e., is of the form

$$H(u) = \phi(x,\lambda,t)\,u + \cdots$$

and the control is restricted to lie between an upper and a lower bound: $a \le u(t) \le b$. To minimize $H(u)$, we need to make $u$ as large or as small as possible, depending on the sign of $\phi(x,\lambda,t)$, specifically:

$$u(t) = \begin{cases} b, & \phi(x,\lambda,t) < 0 \\ ?, & \phi(x,\lambda,t) = 0 \\ a, & \phi(x,\lambda,t) > 0. \end{cases}$$
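The switching law above translates directly into code. The sketch below is illustrative only (the function name and the `None` return for the singular case are assumptions, not from the article):

```python
def bang_bang_control(phi, a, b):
    """Pontryagin switching law for a Hamiltonian linear in the control.

    phi : value of the switching function phi(x, lambda, t)
    a, b: lower and upper control bounds (a <= u <= b)

    Returns the H-minimizing control, or None at a singular point
    (phi == 0), where the minimum principle gives no information.
    """
    if phi < 0:
        return b    # make u as large as possible
    if phi > 0:
        return a    # make u as small as possible
    return None     # singular case: undetermined by H alone

# Example with bounds a = -1, b = 1:
print(bang_bang_control(-0.5, -1.0, 1.0))  # 1.0
print(bang_bang_control(0.5, -1.0, 1.0))   # -1.0
print(bang_bang_control(0.0, -1.0, 1.0))   # None
```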

If $\phi$ is positive at some times, negative at others, and is only zero instantaneously, then the solution is straightforward: a bang-bang control that switches from $b$ to $a$ at the times when $\phi$ switches from negative to positive.

The case when $\phi$ remains at zero for a finite length of time $t_1 \le t \le t_2$ is called the singular control case. Between $t_1$ and $t_2$ the minimization of the Hamiltonian with respect to $u$ gives no useful information, and the solution in that time interval has to be found from other considerations. One approach is to repeatedly differentiate $\partial H/\partial u$ with respect to time until the control $u$ again appears explicitly, though this is not guaranteed to happen after finitely many differentiations. One can then set that expression to zero and solve for $u$. This amounts to saying that between $t_1$ and $t_2$ the control $u$ is determined by the requirement that the singularity condition continue to hold. The resulting so-called singular arc, if it is optimal, will satisfy the Kelley condition:
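The repeated-differentiation procedure can be carried out symbolically. Below is a hedged sketch using sympy on a toy problem that is an assumption of this example, not from the article: minimize $\int x^2\,dt$ subject to $\dot{x} = u$, so that $H = x^2 + \lambda u$ is linear in $u$ with switching function $H_u = \lambda$ and costate equation $\dot{\lambda} = -2x$.

```python
import sympy as sp

t = sp.symbols('t')
u = sp.symbols('u')
x = sp.Function('x')(t)
lam = sp.Function('lam')(t)

# Toy problem (assumed for illustration): minimize ∫ x^2 dt, x' = u.
# Hamiltonian H = x^2 + lam*u; switching function phi = H_u = lam.
H_u = lam

# Along trajectories: x' = u and lam' = -dH/dx = -2x.
subs_dyn = {sp.Derivative(x, t): u, sp.Derivative(lam, t): -2*x}

# Differentiate H_u in time until the control u appears explicitly.
expr = H_u
order = 0
while u not in expr.free_symbols:
    expr = sp.diff(expr, t).subs(subs_dyn)
    order += 1

print(order)                         # u appears after 2 differentiations
print(sp.solve(sp.Eq(expr, 0), u))   # singular control: u = 0
```

Here $(d/dt)H_u = -2x$ still lacks $u$, but $(d/dt)^2 H_u = -2u$ contains it; setting it to zero gives the singular control $u = 0$, valid on an arc where $\lambda = 0$ and $x = 0$.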

$$(-1)^k \frac{\partial}{\partial u}\left[\left(\frac{d}{dt}\right)^{2k} H_u\right] \ge 0, \qquad k = 0, 1, \cdots$$

Others refer to this condition as the generalized Legendre–Clebsch condition.
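As a hedged illustration of checking this condition, take the toy problem (an assumption, not from the article) of minimizing $\int x^2\,dt$ with $\dot{x} = u$, where $H = x^2 + \lambda u$, $H_u = \lambda$, and $(d/dt)^2 H_u = -2u$. For $k = 1$ the Kelley condition reads $(-1)^1\,\partial(-2u)/\partial u = 2 \ge 0$, so the singular arc passes the test:

```python
import sympy as sp

t, u = sp.symbols('t u')
x = sp.Function('x')(t)
lam = sp.Function('lam')(t)

# Toy problem (assumed for illustration): H = x^2 + lam*u,
# so H_u = lam, with dynamics x' = u and costate lam' = -2x.
subs_dyn = {sp.Derivative(x, t): u, sp.Derivative(lam, t): -2*x}

# Compute (d/dt)^2 H_u along trajectories:
d2 = sp.diff(lam, t).subs(subs_dyn)   # first derivative:  -2*x
d2 = sp.diff(d2, t).subs(subs_dyn)    # second derivative: -2*u

# Kelley condition for k = 1: (-1)^1 * d/du[(d/dt)^2 H_u] >= 0
kelley = (-1)**1 * sp.diff(d2, u)
print(kelley)   # 2, which is >= 0: the condition holds
```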

The term bang-singular control refers to a control that has a bang-bang portion as well as a singular portion.


References

  1. Zelikin, M. I.; Borisov, V. F. (2005). "Singular Optimal Regimes in Problems of Mathematical Economics". Journal of Mathematical Sciences. 130 (1): 4409–4570 [Theorem 11.1]. doi:10.1007/s10958-005-0350-5. S2CID 122382003.