Strength is in numbers: Can concordant artificial listeners improve prediction of emotion from speech?

Martinelli, Eugenio; Mencattini, Arianna; Daprati, Elena; Di Natale, Corrado
2016-01-01

Abstract

Humans can communicate their emotions by modulating facial expressions or the tone of their voice. Although numerous applications exist that enable machines to read facial emotions and recognize the content of verbal messages, methods for speech emotion recognition are still in their infancy. Yet, fast and reliable applications for emotion recognition would be a natural advancement of present 'intelligent personal assistants', and may have countless applications in diagnostics, rehabilitation and research. Taking inspiration from the dynamics of human group decision-making, we devised a novel speech emotion recognition system that applies, for the first time, a semi-supervised prediction model based on consensus. Three tests were carried out to compare this algorithm with traditional approaches. Labeling performance on a public database of spontaneous speech is reported. The novel system appears to be fast, robust and less computationally demanding than traditional methods, allowing for easier implementation in portable voice analyzers (as used in rehabilitation, research, industry, etc.) and for applications in the research domain (such as real-time pairing of stimuli to participants' emotional state, or selective/differential data collection based on emotional content).
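The consensus idea described in the abstract can be illustrated with a toy sketch. The snippet below is a minimal illustration and not the authors' implementation: it assumes an ensemble of "artificial listeners" built as simple classifiers trained on random feature subsets, a hypothetical binary emotion label (e.g., low/high arousal), and an agreement threshold for accepting a label on unlabeled utterances; all names and parameters are illustrative.

```python
# Hedged sketch of consensus-based semi-supervised labeling: several
# "artificial listeners" vote on each unlabeled utterance, and a label
# is accepted only when enough listeners concur. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins for acoustic feature vectors (e.g., pitch/energy statistics)
X_labeled = rng.normal(size=(100, 20))
y_labeled = rng.integers(0, 2, size=100)   # hypothetical binary emotion label
X_unlabeled = rng.normal(size=(50, 20))

def train_listeners(X, y, n_listeners=7, subset=10):
    """Each listener is trained on a different random subset of features."""
    listeners = []
    for _ in range(n_listeners):
        idx = rng.choice(X.shape[1], size=subset, replace=False)
        clf = LogisticRegression(max_iter=1000).fit(X[:, idx], y)
        listeners.append((clf, idx))
    return listeners

def consensus_label(listeners, X_new, threshold=0.8):
    """Accept a label only when >= threshold of listeners agree on it."""
    votes = np.array([clf.predict(X_new[:, idx]) for clf, idx in listeners])
    majority = np.round(votes.mean(axis=0)).astype(int)   # majority vote
    agreement = (votes == majority).mean(axis=0)          # fraction agreeing
    return majority, agreement >= threshold

listeners = train_listeners(X_labeled, y_labeled)
labels, confident = consensus_label(listeners, X_unlabeled)
print(f"{confident.sum()}/{len(labels)} utterances labeled by consensus")
```

In a semi-supervised loop, utterances labeled with high agreement could be added to the training pool while ambiguous ones are left unlabeled; this mirrors, in spirit, the group decision-making dynamic the paper takes inspiration from.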
2016
Published
International relevance
Article
Anonymous peer review
Sector BIO/09 - Physiology
Sector M-PSI/02 - Psychobiology and Physiological Psychology
Sector ING-INF/01 - Electronics
Sector ING-INF/07 - Electrical and Electronic Measurements
English
With ISI Impact Factor
http://dx.doi.org/10.1371/journal.pone.0161752
Martinelli, E., Mencattini, A., Daprati, E., DI NATALE, C. (2016). Strength is in numbers: Can concordant artificial listeners improve prediction of emotion from speech?. PLOS ONE, 11(8), e0161752 [10.1371/journal.pone.0161752].
Martinelli, E.; Mencattini, A.; Daprati, E.; Di Natale, C.
Journal article
Files in this product:
Strength is in numbers_Martinelli.pdf (open access; License: Creative Commons; Size: 2.54 MB; Format: Adobe PDF)

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/170610
Citations
  • PMC: 1
  • Scopus: 5
  • Web of Science (ISI): 5