Sassano, M. (2025). Infinite-horizon optimal control of nonlinear discrete-time systems: HJB pde, Hamiltonian dynamics and invariant manifolds. Automatica, 179, Article 112441. doi:10.1016/j.automatica.2025.112441.
Infinite-horizon optimal control of nonlinear discrete-time systems: HJB pde, Hamiltonian dynamics and invariant manifolds
Sassano, Mario
2025-01-01
Abstract
Nonlinear discrete-time optimal control problems are studied over an infinite horizon with the aim of establishing a connection between the solution of the Bellman equation and the trajectories of the Hamiltonian difference dynamics associated with the problem. First, a discrete-time counterpart of the Hamilton–Jacobi–Bellman partial differential equation is introduced and discussed. This equation is then instrumental in showing that a certain manifold, involving the costate variable and the gradient of the value function, is invariant for the Hamiltonian dynamics, thus recovering a well-known property of continuous-time optimal control problems. This feature is finally leveraged to envision an episodic learning strategy based on the notion of invariant manifold.
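The invariance property referred to in the abstract can be sketched in standard discrete-time optimal-control notation; the symbols below (dynamics f, stage cost ℓ, value function V, costate λ, Hamiltonian H) are assumptions of this sketch and are not defined in the record itself.

```latex
% Assumed setup: x_{k+1} = f(x_k, u_k), cost \sum_k \ell(x_k, u_k),
% Hamiltonian and costate recursion from the discrete minimum principle:
\begin{align*}
  H(x, u, \lambda) &= \ell(x, u) + \lambda^{\top} f(x, u), \\
  \lambda_k &= \nabla_x \ell(x_k, u_k)
    + \Bigl(\tfrac{\partial f}{\partial x}(x_k, u_k)\Bigr)^{\!\top} \lambda_{k+1}.
\end{align*}
% Differentiating the Bellman equation
% V(x) = \min_u \{ \ell(x,u) + V(f(x,u)) \}
% along the optimal feedback u^{\star}(x), and using first-order
% optimality to cancel the terms involving \partial u^{\star}/\partial x, gives
\begin{equation*}
  \nabla V(x_k) = \nabla_x \ell(x_k, u_k^{\star})
    + \Bigl(\tfrac{\partial f}{\partial x}(x_k, u_k^{\star})\Bigr)^{\!\top}
      \nabla V(x_{k+1}),
\end{equation*}
% which is exactly the costate recursion with \lambda_k = \nabla V(x_k):
% the graph \{(x, \lambda) : \lambda = \nabla V(x)\} is invariant for the
% Hamiltonian difference dynamics.
```

This mirrors the classical continuous-time fact that the Lagrangian submanifold λ = ∇V(x) is invariant for the Hamiltonian flow; the paper's contribution, per the abstract, is the discrete-time counterpart and its use in an episodic learning strategy.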


