Biferale, L., Bonaccorso, F., Buzzicotti, M., Clark Di Leoni, P., & Gustavsson, K. (2019). Zermelo's problem: Optimal point-to-point navigation in 2D turbulent flows using reinforcement learning. Chaos, 29(10), 103138. https://doi.org/10.1063/1.5120370
Zermelo's problem: Optimal point-to-point navigation in 2D turbulent flows using reinforcement learning
Biferale, L.; Buzzicotti, M.
2019-01-01
Abstract
Finding the path that minimizes the time to navigate between two given points in a fluid flow is known as Zermelo's problem. Here, we investigate it using a Reinforcement Learning (RL) approach for a vessel that has a slip velocity of fixed intensity, Vs, but variable direction, navigating in a 2D turbulent sea. We show that an Actor-Critic RL algorithm is able to find quasi-optimal solutions for both time-independent and chaotically evolving flow configurations. For the frozen case, we also compare the results with strategies obtained analytically from continuous Optimal Navigation (ON) protocols. We show that for our application, ON solutions are unstable over the typical duration of the navigation process and are, therefore, not useful in practice. On the other hand, RL solutions are much more robust with respect to small changes in the initial conditions and to external noise, even when Vs is much smaller than the maximum flow velocity. Furthermore, we show how the RL approach is able to take advantage of the flow properties in order to reach the target, especially when the steering speed is small. Published under license by AIP Publishing.
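For reference, the setup described in the abstract can be written compactly. The vessel position X obeys the kinematics below, and the second equation is the classical Zermelo steering law used by continuous Optimal Navigation protocols for a time-independent flow (Bryson–Ho sign conventions; this is the textbook form of the ON solution, not an excerpt from the paper):

```latex
\begin{align}
  \dot{\mathbf{X}}(t) &= \mathbf{u}(\mathbf{X},t) + V_s\,\hat{\mathbf{n}}(\theta),
  \qquad \hat{\mathbf{n}}(\theta) = (\cos\theta,\,\sin\theta),\\
  \dot{\theta} &= \sin^2\!\theta\,\partial_x v
  + \sin\theta\cos\theta\,\bigl(\partial_x u - \partial_y v\bigr)
  - \cos^2\!\theta\,\partial_y u,
\end{align}
```

where u = (u, v) is the carrier flow, Vs the fixed slip speed, and θ the steering angle. Because the steering law depends on the local velocity gradients, trajectories of this ODE in a turbulent flow are chaotic, which is why the abstract reports that ON solutions are unstable over typical navigation times.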
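The following is a minimal, self-contained sketch of the kind of tabular one-step Actor-Critic navigation the abstract describes. The Taylor-Green vortex standing in for the turbulent sea, the grid resolution, learning rates, and reward shaping are all illustrative assumptions, not the paper's actual configuration:

```python
# Sketch (not the authors' code): tabular one-step Actor-Critic for
# point-to-point navigation in a frozen 2D flow. A Taylor-Green vortex
# stands in for the turbulent snapshot; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

VS = 0.2                                   # slip speed, fixed intensity
ACTIONS = np.linspace(0, 2 * np.pi, 8, endpoint=False)  # discrete headings
NX = NY = 20                               # coarse grid of position states
DT = 0.05                                  # integration/decision time step
START, TARGET = np.array([0.2, 0.2]), np.array([0.8, 0.8])

def flow(p):
    """Frozen, incompressible Taylor-Green velocity field on [0,1]^2."""
    x, y = 2 * np.pi * p
    return np.array([np.sin(x) * np.cos(y), -np.cos(x) * np.sin(y)])

def state(p):
    """Map a continuous position to a discrete grid-cell index."""
    i = min(int(p[0] * NX), NX - 1)
    j = min(int(p[1] * NY), NY - 1)
    return i * NY + j

theta = np.zeros((NX * NY, len(ACTIONS)))  # actor: policy preferences
V = np.zeros(NX * NY)                      # critic: state values
alpha, beta, gamma = 0.1, 0.1, 0.99

for episode in range(2000):
    p = START.copy()
    for step in range(400):
        s = state(p)
        prefs = theta[s] - theta[s].max()
        pi = np.exp(prefs) / np.exp(prefs).sum()      # softmax policy
        a = rng.choice(len(ACTIONS), p=pi)
        n = np.array([np.cos(ACTIONS[a]), np.sin(ACTIONS[a])])
        p = np.clip(p + DT * (flow(p) + VS * n), 0.0, 1.0)  # vessel kinematics
        done = np.linalg.norm(p - TARGET) < 0.05
        r = 10.0 if done else -DT                     # penalize elapsed time
        s2 = state(p)
        delta = r + (0.0 if done else gamma * V[s2]) - V[s]  # TD error
        V[s] += beta * delta                          # critic update
        grad = -pi
        grad[a] += 1.0                                # grad of log softmax
        theta[s] += alpha * delta * grad              # actor update
        if done:
            break
```

Learning a state-action policy of this kind, rather than integrating the ON steering ODE, is what gives the RL strategy its robustness to perturbed initial conditions and noise: the policy prescribes a heading for every region of the domain, not for a single trajectory.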
File | Access | License | Size | Format
---|---|---|---|---
1907.08591 (2).pdf | Open access | Creative Commons | 7.59 MB | Adobe PDF
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.