
Caporuscio, M., D'Angelo, M., Grassi, V., Mirandola, R. (2016). Reinforcement learning techniques for decentralized self-adaptive service assembly. In E.B.J. M. Aiello (Eds.), ESOCC 2016, 5th European Conference on Service-Oriented and Cloud Computing (pp. 53-68). Springer-Verlag [10.1007/978-3-319-44482-6_4].

Reinforcement learning techniques for decentralized self-adaptive service assembly

GRASSI, VINCENZO;
2016-01-01

Abstract

This paper proposes a self-organizing, fully decentralized solution for the service assembly problem, whose goal is to guarantee a good overall quality for the delivered services while ensuring fairness among the participating peers. The main features of our solution are: (i) the use of a gossip protocol to support decentralized information dissemination and decision making, and (ii) the use of a reinforcement learning approach that enables each peer to learn from its own experience the service selection rule to follow, thus overcoming the lack of global knowledge. Moreover, we explicitly take into account load-dependent quality attributes, which leads to the definition of a service selection rule that steers the system away from overload conditions that could adversely affect quality and fairness. Simulation experiments show that our solution self-adapts to variations by quickly converging to viable assemblies that maintain the specified quality and fairness objectives.
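To give a flavor of the kind of selection rule the abstract describes, the sketch below shows a minimal epsilon-greedy reinforcement-learning selector with a load-dependent quality signal. It is an illustrative assumption only: the class `ServiceSelector`, the `observed_quality` model, and all parameter names are hypothetical and not the paper's actual algorithm.

```python
import random

class ServiceSelector:
    """Hypothetical sketch of an RL-based service selection rule
    (not the paper's algorithm): each peer keeps a quality estimate
    per candidate provider and updates it from observed outcomes."""

    def __init__(self, providers, epsilon=0.1, alpha=0.2):
        self.q = {p: 0.0 for p in providers}  # learned quality estimates
        self.epsilon = epsilon                # exploration probability
        self.alpha = alpha                    # learning rate

    def select(self):
        # Epsilon-greedy: occasionally explore a random provider,
        # otherwise exploit the current best estimate.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, provider, quality):
        # Incremental update of the estimate toward the observed quality.
        self.q[provider] += self.alpha * (quality - self.q[provider])

def observed_quality(base_quality, load):
    # Assumed load-dependent quality model: delivered quality
    # degrades as the provider's load grows.
    return base_quality / (1.0 + load)
```

For example, a peer choosing between two providers with base qualities 1.0 and 0.4 would, after repeated select/observe/update rounds, learn to prefer the better one while the epsilon term keeps probing the alternative in case conditions change.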
2016
Field ING-INF/05 - Information Processing Systems
English
International relevance
Scientific article in conference proceedings
Service assembly, self-adaptation, decentralized management, reinforcement learning
Caporuscio, M; D'Angelo, M; Grassi, V; Mirandola, R
Book contribution
Files in this product: ESOCC2016.pdf (Adobe PDF, 622.09 kB) — authorized users only; license not specified.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/173151
Citations
  • Scopus: 13
  • Web of Science: 9