A reinforcement learning-based QAM/PSK symbol synchronizer

Cardarilli G. C.; Di Nunzio L.; Fazzolari R.; Re M.; Spano S.
2019-01-01

Abstract

Machine Learning (ML) based on supervised and unsupervised learning models has recently been applied in the telecommunication field. However, such techniques rely on large application-specific datasets, and their performance deteriorates if the statistics of the inference data change over time. Reinforcement Learning (RL) is a solution to these issues because it is able to adapt its behavior to the changing statistics of the input data. In this work, we propose the design of an RL Agent able to learn the behavior of a Timing Recovery Loop (TRL) through the Q-Learning algorithm. The Agent is compatible with popular PSK and QAM formats. We validated the RL synchronizer by comparing it to the Mueller and Müller TRL in terms of Modulation Error Ratio (MER) in a noisy channel scenario. The results show a good trade-off in terms of MER performance: the RL-based synchronizer loses less than 1 dB of MER with respect to the conventional one, but it is able to adapt its behavior to different modulation formats without the need for any tuning of the system parameters.
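The record itself contains no code. As a generic illustration of the Q-Learning update rule the abstract refers to (this is not the authors' implementation; the toy state/action discretization below is a hypothetical stand-in for a timing-error observation and a sampling-phase adjustment), a minimal tabular sketch:

```python
import random

# Minimal tabular Q-Learning sketch (illustrative only).
# State: a discretized timing-error bin; action: retard/hold/advance
# the sampling phase. All sizes and constants are hypothetical.
N_STATES = 8              # timing-error bins
ACTIONS = [-1, 0, +1]     # phase adjustment: retard, hold, advance
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table: one row per state, one column per action, initialized to zero.
Q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table row."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    row = Q[state]
    return row.index(max(row))

def q_update(s, a, reward, s_next):
    """Standard Q-Learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    """
    Q[s][a] += ALPHA * (reward + GAMMA * max(Q[s_next]) - Q[s][a])
```

In a synchronizer setting, the reward would be derived from a timing-error metric, so that phase adjustments that reduce the error are reinforced regardless of the modulation format.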
2019
Published
International relevance
Article
Anonymous expert reviewers
Settore ING-INF/01 - ELETTRONICA (Electronics)
English
Matta, M., Cardarilli, G. C., Di Nunzio, L., Fazzolari, R., Giardino, D., Nannarelli, A., et al. (2019). A reinforcement learning-based QAM/PSK symbol synchronizer. IEEE Access, 7, 124147-124157 [10.1109/ACCESS.2019.2938390].
Matta, M.; Cardarilli, G. C.; Di Nunzio, L.; Fazzolari, R.; Giardino, D.; Nannarelli, A.; Re, M.; Spano, S.
Journal article
Files for this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/265845
Citazioni
  • Scopus 26
  • Web of Science 13