Sassano, M., Astolfi, A. (2010). Dynamic solution of the HJB equation and the optimal control of nonlinear systems. Presented at the Proceedings of the IEEE Conference on Decision and Control [10.1109/CDC.2010.5716990].
Dynamic solution of the HJB equation and the optimal control of nonlinear systems
Sassano M.; Astolfi A.
2010-01-01
Abstract
Optimal control problems are often solved by exploiting the solution of the so-called Hamilton-Jacobi-Bellman (HJB) partial differential equation, which may, however, be hard or impossible to solve in specific examples. Herein we circumvent this issue by determining a dynamic solution of the HJB equation, without solving any partial differential equation. The methodology yields a dynamic control law that minimizes a cost functional defined as the sum of the original cost and an additional cost.
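For context, the HJB equation referenced in the abstract has the following standard infinite-horizon form; the notation below (f, g, q, R, V) is the usual one for this setting and is assumed here, not taken from the paper itself.

```latex
% Standard setting (assumed, not from the paper):
%   dynamics  \dot{x} = f(x) + g(x)u,
%   cost      J = \int_0^\infty \big( q(x) + u^\top R\, u \big)\, dt,  with  R = R^\top > 0.
% The value function V(x) satisfies the HJB partial differential equation
\[
  V_x(x)\, f(x)
  \;-\; \tfrac{1}{4}\, V_x(x)\, g(x)\, R^{-1} g(x)^{\top} V_x(x)^{\top}
  \;+\; q(x) \;=\; 0,
\]
% and the associated optimal feedback is
\[
  u^{\ast}(x) \;=\; -\tfrac{1}{2}\, R^{-1} g(x)^{\top} V_x(x)^{\top},
\]
% where V_x denotes the row vector of partial derivatives of V.
```

Solving this PDE for V in closed form is generally intractable for nonlinear f and g, which is the difficulty the paper's dynamic-solution approach is designed to avoid.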