Identifying emotions in opera singing: Implications of adverse acoustic conditions

Emilia Parada-Cabaleiro; Giovanni Costantini; et al.
2018-01-01

Abstract

The expression of emotion is an inherent aspect of singing, especially in operatic voice. Yet, adverse acoustic conditions, such as an open-air performance or a noisy analog recording, may affect its perception. State-of-the-art methods for evaluating emotional speech, such as perception experiments, acoustic analyses, and machine learning techniques, have also been applied to operatic voice. Still, the extent to which adverse acoustic conditions may impair listeners’ and machines’ identification of emotion from vocal cues has only been investigated in the realm of speech. For our study, 132 listeners evaluated 390 nonsense operatic sung instances of five basic emotions, degraded by three noise types (brown, pink, and white), each at four Signal-to-Noise Ratios (-1 dB, -0.5 dB, +1 dB, and +3 dB); the performance of state-of-the-art automatic recognition methods was evaluated as well. Our findings show that the three noise types affect female and male singers similarly and that listeners’ gender did not play a role. Human perception and automatic classification display similar confusion and recognition patterns: sadness is identified best and fear worst, and emotions low in arousal show higher confusion.
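
As a worked illustration of the degradation procedure described in the abstract (not the authors’ actual code), the following Python sketch shows one common way to generate brown, pink, and white noise and mix it with a vocal signal at a target Signal-to-Noise Ratio such as -1 dB, -0.5 dB, +1 dB, or +3 dB. The spectral-shaping and scaling details are illustrative assumptions only.

```python
# Minimal sketch (assumed procedure, not the paper's implementation):
# create coloured noise and add it to a signal at a chosen SNR in dB.
import numpy as np

def coloured_noise(n_samples, colour="white", rng=None):
    """White, pink (1/f power), or brown (1/f^2 power) noise, obtained by
    shaping the spectrum of white Gaussian noise."""
    rng = rng or np.random.default_rng()
    white = rng.standard_normal(n_samples)
    if colour == "white":
        return white
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    exponent = {"pink": 0.5, "brown": 1.0}[colour]   # amplitude ~ 1/f^exponent
    return np.fft.irfft(spectrum / freqs**exponent, n=n_samples)

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that signal power / noise power equals the target
    SNR (in dB), then add it to `signal`."""
    p_signal = np.mean(signal**2)
    p_noise = np.mean(noise**2)
    target_p_noise = p_signal / (10 ** (snr_db / 10.0))
    return signal + noise * np.sqrt(target_p_noise / p_noise)

if __name__ == "__main__":
    # A synthetic sine stands in for a sung phrase in this toy example.
    sr = 16000
    t = np.linspace(0, 1.0, sr, endpoint=False)
    voice = 0.5 * np.sin(2 * np.pi * 440 * t)
    for colour in ("brown", "pink", "white"):
        for snr in (-1.0, -0.5, 1.0, 3.0):
            noisy = mix_at_snr(voice, coloured_noise(sr, colour), snr)
            achieved = 10 * np.log10(np.mean(voice**2) /
                                     np.mean((noisy - voice)**2))
            print(f"{colour:>5} noise, target {snr:+.1f} dB SNR, "
                  f"achieved {achieved:+.2f} dB")
```

Scaling the noise rather than the voice keeps the level of the sung material constant across conditions, so that only the amount of masking varies between stimuli.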
ISMIR 2018, 19th International Society for Music Information Retrieval Conference
Paris, France
2018
19
International relevance
Contribution
Sep-2018
2018
Settore ING-IND/31 - Electrical Engineering
English
Conference contribution
Parada-Cabaleiro, E., Schmitt, M., Batliner, A., Hantke, S., Costantini, G., Scherer, K., et al. (2018). Identifying emotions in opera singing: Implications of adverse acoustic conditions. In Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR 2018. International Society for Music Information Retrieval.
Parada-Cabaleiro, E; Schmitt, M; Batliner, A; Hantke, S; Costantini, G; Scherer, K; Schuller, BW
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2108/206831
Citations
  • Scopus: 6