
Russo Russo, G., Cardellini, V., Lo Presti, F. (2019). Reinforcement Learning Based Policies for Elastic Stream Processing on Heterogeneous Resources. In DEBS '19 Proceedings of the 13th ACM International Conference on Distributed and Event-based Systems (pp.31-42). New York : ACM [10.1145/3328905.3329506].

Reinforcement Learning Based Policies for Elastic Stream Processing on Heterogeneous Resources

Russo Russo, Gabriele;Cardellini, Valeria;Lo Presti, Francesco
2019-06-01

Abstract

Data Stream Processing (DSP) has emerged as a key enabler for developing pervasive services that require processing data in a near real-time fashion. DSP applications keep up with the high volume of produced data by scaling their execution across multiple computing nodes, so as to process the incoming data flow in parallel. Workload variability requires elastically adapting the application parallelism at run-time in order to avoid over-provisioning. Elasticity policies for DSP have been widely investigated, but mostly under the simplifying assumption of homogeneous infrastructures. The resulting solutions do not capture the richness and inherent complexity of modern infrastructures, where heterogeneous computing resources are available on-demand. In this paper, we formulate the problem of controlling elasticity on heterogeneous resources as a Markov Decision Process (MDP). The resulting MDP is not easily solved by traditional techniques due to state space explosion, and thus we show how linear Function Approximation and Tile Coding can be used to efficiently compute elasticity policies at run-time. In order to deal with parameter uncertainty, we integrate the proposed approach with Reinforcement Learning algorithms. Our numerical evaluation shows the efficacy of the presented solutions compared to standard methods in terms of accuracy and convergence speed.
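To illustrate the technique the abstract names, the following is a minimal, generic sketch of tile coding combined with linear function approximation for Q-learning. It is not the paper's implementation: the 2-D state (here imagined as, e.g., a normalized parallelism level and input rate), the three actions, and all names and parameters are illustrative assumptions.

```python
import numpy as np

class TileCoder:
    """Map a continuous 2-D state onto indices of overlapping tilings
    (hypothetical setup, not the paper's configuration)."""
    def __init__(self, n_tilings=8, tiles_per_dim=10,
                 bounds=((0.0, 1.0), (0.0, 1.0))):
        self.n_tilings = n_tilings
        self.tiles_per_dim = tiles_per_dim
        self.bounds = bounds
        # Each tiling is shifted by a fraction of one tile width.
        self.offsets = [i / n_tilings for i in range(n_tilings)]

    def features(self, state):
        """Return one active tile index per tiling."""
        idxs = []
        for t, off in enumerate(self.offsets):
            coords = []
            for (lo, hi), s in zip(self.bounds, state):
                x = (s - lo) / (hi - lo)  # normalize to [0, 1]
                coords.append(int(x * self.tiles_per_dim + off)
                              % self.tiles_per_dim)
            # Flatten (tiling, row, col) into a single feature index.
            idxs.append(t * self.tiles_per_dim ** 2
                        + coords[0] * self.tiles_per_dim + coords[1])
        return idxs

# Linear function approximation: Q(s, a) = sum of weights of active tiles.
coder = TileCoder()
n_actions = 3  # e.g., scale-out, scale-in, no-op (assumed action set)
weights = np.zeros((n_actions, coder.n_tilings * coder.tiles_per_dim ** 2))

def q_value(state, action):
    return weights[action, coder.features(state)].sum()

def update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One semi-gradient Q-learning step; the step size is split
    across tilings so the effective learning rate stays at alpha."""
    target = reward + gamma * max(q_value(next_state, a)
                                  for a in range(n_actions))
    td_error = target - q_value(state, action)
    weights[action, coder.features(state)] += alpha / coder.n_tilings * td_error
```

Because each state activates only one tile per tiling, a Q-value lookup touches just `n_tilings` weights regardless of how fine the discretization is, which is what makes this representation tractable where a full tabular MDP state space would explode.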
13th ACM International Conference on Distributed and Event-based Systems (DEBS 2019)
Darmstadt, Germany
2019
International relevance
contribution
Jun-2019
Sector ING-INF/05 - Information Processing Systems
English
https://dl.acm.org/citation.cfm?doid=3328905.3329506
Conference contribution
Russo Russo, G; Cardellini, V; Lo Presti, F
Files in this product:

File: debs2019.pdf
Access: authorized users only
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 1.17 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/215313
Citations
  • PMC: ND
  • Scopus: 29
  • Web of Science: 18