Controlling Rayleigh–Bénard convection via reinforcement learning

Biferale, Luca
2020-07-29

Abstract

Thermal convection is ubiquitous in nature as well as in many industrial applications. The identification of effective control strategies to, e.g., suppress or enhance the convective heat exchange under fixed external thermal gradients is an outstanding fundamental and technological issue. In this work, we explore a novel approach, based on a state-of-the-art Reinforcement Learning (RL) algorithm, which is capable of significantly reducing the heat transport in a two-dimensional Rayleigh–Bénard system by applying small temperature fluctuations to the lower boundary of the system. By using numerical simulations, we show that our RL-based control is able to stabilise the conductive regime and delay the onset of convection up to a Rayleigh number $Ra_c \approx 3\times 10^4$, whereas state-of-the-art linear controllers reach $Ra_c \approx 10^4$. Additionally, for $Ra > 3\times 10^4$, our approach outperforms other state-of-the-art control algorithms, reducing the heat flux by a factor of about 2.5. In the last part of the manuscript, we address the theoretical limits of controlling unstable and chaotic dynamics such as the one considered here. We show that controllability is hindered by limited observability and/or actuation capabilities, which can be quantified in terms of characteristic time delays. When these delays become comparable with the Lyapunov time of the system, control becomes impossible.
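The control problem sketched in the abstract — an agent that observes the flow, applies small temperature perturbations to the lower boundary, and is rewarded for suppressing convective heat transport — fits the standard RL interaction loop. The snippet below is a minimal, self-contained sketch of that loop and is not the paper's implementation: it replaces the full 2D Rayleigh–Bénard simulation with the Lorenz-63 truncation as a cheap proxy, uses plain tabular Q-learning instead of a state-of-the-art algorithm, and all names, parameter values, and the heat-flux proxy used as reward are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's implementation): tabular Q-learning controlling
# a reduced-order proxy of Rayleigh-Benard convection. The Lorenz-63 system is
# used as a stand-in for the full PDE: its forcing parameter r plays the role of
# the imposed thermal gradient, and the action adds a small perturbation to it,
# loosely analogous to the small bottom-plate temperature fluctuations in the paper.

import numpy as np

rng = np.random.default_rng(0)

SIGMA, R0, B = 10.0, 28.0, 8.0 / 3.0   # standard chaotic Lorenz parameters (assumed)
DT = 0.01                              # integration time step
ACTIONS = np.array([-2.0, 0.0, 2.0])   # small perturbations of the forcing r

def step_dynamics(state, dr):
    """One explicit-Euler step of the controlled Lorenz system (convection proxy)."""
    x, y, z = state
    r = R0 + dr
    dx = SIGMA * (y - x)
    dy = x * (r - z) - y
    dz = x * y - B * z
    return state + DT * np.array([dx, dy, dz])

def discretize(state, bins=6, lim=30.0):
    """Map the continuous state to a coarse index so a tabular agent can be used."""
    clipped = np.clip(state, -lim, lim)
    idx = ((clipped + lim) / (2 * lim) * (bins - 1)).astype(int)
    return idx[0] * bins * bins + idx[1] * bins + idx[2]

def reward(state):
    """Negative convective-transport proxy: in the Lorenz truncation the product
    x*y is proportional to the convective heat transport, so penalising its
    magnitude crudely mimics reducing Nu - 1."""
    x, y, _ = state
    return -abs(x * y)

n_states, n_actions = 6 ** 3, len(ACTIONS)
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.99, 0.1      # learning rate, discount, exploration

state = np.array([1.0, 1.0, 1.0])
s = discretize(state)
for t in range(200_000):
    # epsilon-greedy action selection
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    state = step_dynamics(state, ACTIONS[a])
    s_next, r = discretize(state), reward(state)
    # one-step Q-learning update
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

print("greedy action per coarse state (first few):", np.argmax(Q, axis=1)[:10])
```

In the paper's setting the environment would instead be a numerical simulation of the 2D Rayleigh–Bénard system, the action a small modulation of the lower-boundary temperature, and the reward tied to the measured heat flux (Nusselt number). The abstract's controllability limit can be read off the same loop: if the delay between observation and actuation becomes comparable to the Lyapunov time $1/\lambda_{\max}$, the chaotic state decorrelates before the action takes effect and control fails.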
29 July 2020
Published
International relevance
Article
Anonymous peer review
Sector FIS/02
Sector PHYS-02/A - Theoretical physics of fundamental interactions, models, mathematical methods and applications
English
With ISI Impact Factor
Beintema, G., Corbetta, A., Biferale, L., Toschi, F. (2020). Controlling Rayleigh–Bénard convection via reinforcement learning. JOURNAL OF TURBULENCE, 21(9-10), 585-605 [10.1080/14685248.2020.1797059].
Beintema, G; Corbetta, A; Biferale, L; Toschi, F
Journal article
Files in this record:

File: Controlling Rayleigh Bénard convection via reinforcement learning.pdf
Access: open access
Type: Editorial version (PDF)
Licence: Creative Commons
Size: 3.82 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/397863
Citations
  • PMC: ND
  • Scopus: 83
  • ISI: 77