Sassano, M., Astolfi, A. (2011). Approximate finite-horizon optimal control without PDE's. Presented at: Proceedings of the IEEE Conference on Decision and Control. doi:10.1109/CDC.2011.6161137
Approximate finite-horizon optimal control without PDE's
Sassano, M.; Astolfi, A.
2011-01-01
Abstract
The problem of steering the state of a system from a given initial condition over a fixed time interval, while simultaneously minimizing a criterion of optimality, is commonly referred to as the finite-horizon optimal control problem. It is well known that one of the standard solutions to the finite-horizon optimal control problem relies upon the solution of the Hamilton-Jacobi-Bellman (HJB) partial differential equation, which may be difficult or impossible to obtain in closed form. Herein we propose a methodology that avoids the explicit solution of the HJB PDE for input-affine nonlinear systems by means of a dynamic extension. This results in a dynamic time-varying state feedback yielding an approximate solution to the finite-horizon optimal control problem.
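For context, the following is a sketch of the standard finite-horizon setting the abstract refers to, for an input-affine system \dot{x} = f(x) + g(x)u. The running cost q, input weight R, and terminal cost m below are generic placeholders chosen for illustration, not taken from the paper.

\begin{align}
  % Cost functional over the fixed horizon [t_0, T] (q, R, m are assumed, generic terms):
  J(x_0, u) &= \int_{t_0}^{T} \big( q(x(t)) + u(t)^\top R\, u(t) \big)\, dt + m(x(T)), \\
  % The value function V(x,t) satisfies the HJB PDE with a terminal condition:
  -\frac{\partial V}{\partial t} &= \min_{u} \Big[ q(x) + u^\top R u
      + \frac{\partial V}{\partial x}\big( f(x) + g(x)u \big) \Big],
  \qquad V(x, T) = m(x), \\
  % whose pointwise minimizer yields the optimal state feedback:
  u^\star(x, t) &= -\tfrac{1}{2}\, R^{-1} g(x)^\top
      \Big( \frac{\partial V}{\partial x} \Big)^{\!\top}.
\end{align}

Substituting u^\star back into the HJB equation gives a nonlinear PDE in V whose explicit closed-form solution is, in general, unavailable; it is precisely this explicit solution that the dynamic-extension approach described in the abstract is designed to avoid.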