Canese, L., Cardarilli, G.C., Di Nunzio, L., Fazzolari, R., Re, M., Spano, S. (2024). Resilient multi-agent RL: introducing DQ-RTS for distributed environments with data loss. Scientific Reports, 14(1). https://doi.org/10.1038/s41598-023-48767-1

Resilient multi-agent RL: introducing DQ-RTS for distributed environments with data loss

Canese L.; Cardarilli G.C.; Di Nunzio L.; Fazzolari R.; Re M.; Spano S.
2024-01-01

Abstract

This paper proposes DQ-RTS, a novel decentralized Multi-Agent Reinforcement Learning algorithm designed to address the challenges posed by non-ideal communication and a varying number of agents in distributed environments. DQ-RTS incorporates an optimized communication protocol to mitigate data loss between agents. A comparative analysis between DQ-RTS and its non-decentralized counterpart Q-RTS (Q-learning for Real-Time Swarms) demonstrates the superior convergence speed of DQ-RTS, which achieves a speed-up factor ranging from 1.6 to 2.7 in scenarios with non-ideal communication. Moreover, DQ-RTS maintains its performance even when the agent population fluctuates, making it well suited to applications in which the number of agents varies over time. Extensive experiments on various benchmark tasks further validate the scalability and effectiveness of DQ-RTS, establishing its potential as a practical solution for resilient Multi-Agent Reinforcement Learning in dynamic distributed environments.
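
The abstract describes decentralized Q-learning agents that keep learning while exchanging information over a lossy channel and while the swarm size changes. The minimal Python sketch below only illustrates that general setting; the toy environment, the element-wise-max merging rule, and all parameter names (P_LOSS, SHARE_EVERY, etc.) are assumptions made for illustration and are not the actual DQ-RTS protocol or the experimental setup of the paper.

import numpy as np

# Illustrative sketch only: tabular Q-learning agents that periodically
# broadcast their Q-tables over a lossy channel and merge whatever arrives.
# The merge rule (element-wise max) and all parameters are assumptions,
# not the DQ-RTS protocol described in the paper.

N_STATES, N_ACTIONS = 16, 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
P_LOSS = 0.3           # probability that a broadcast Q-table is lost
SHARE_EVERY = 10       # learning steps between exchange rounds

rng = np.random.default_rng(0)

class Agent:
    def __init__(self):
        self.q = np.zeros((N_STATES, N_ACTIONS))
        self.state = rng.integers(N_STATES)

    def act(self):
        # epsilon-greedy action selection
        if rng.random() < EPSILON:
            return int(rng.integers(N_ACTIONS))
        return int(np.argmax(self.q[self.state]))

    def step(self):
        a = self.act()
        # toy environment: random transition, reward only in one goal state
        next_state = int(rng.integers(N_STATES))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        td_target = reward + GAMMA * np.max(self.q[next_state])
        self.q[self.state, a] += ALPHA * (td_target - self.q[self.state, a])
        self.state = next_state

def exchange(agents):
    """Each agent broadcasts its Q-table; packets drop with probability P_LOSS.
    Receivers merge whatever arrives with an element-wise max (assumed rule)."""
    for receiver in agents:
        received = [a.q for a in agents
                    if a is not receiver and rng.random() > P_LOSS]
        if received:
            receiver.q = np.maximum(receiver.q, np.max(received, axis=0))

agents = [Agent() for _ in range(4)]
for t in range(1, 501):
    for a in agents:
        a.step()
    if t % SHARE_EVERY == 0:
        exchange(agents)
    if t == 250:
        agents.append(Agent())   # the agent population may change at run time

print("max Q-value per agent:", [round(float(a.q.max()), 3) for a in agents])

The sketch shows how learning can proceed when broadcasts are dropped or agents join mid-run; the actual fusion rule, communication protocol, and benchmarks should be taken from the full text of the paper.
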
2024
Published
International relevance
Article
Anonymous expert reviewers
Scientific disciplinary sector ING-INF/01
English
Journal article
Files in this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/364004
Citations
  • PubMed Central: 1
  • Scopus: 2
  • Web of Science (ISI): 2