Duggento, A., Conti, A., Guerrisi, M., Toschi, N. (2021). Classification of real-world pathological phonocardiograms through multi-instance learning. In ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY (pp. 771-774). IEEE [10.1109/EMBC46164.2021.9630705].

Classification of real-world pathological phonocardiograms through multi-instance learning

Duggento A.; Conti A.; Guerrisi M.; Toschi N.
2021-01-01

Abstract

Heart auscultation is an inexpensive and fundamental technique for effectively diagnosing cardiovascular disease. However, due to relatively high human error rates even when auscultation is performed by an experienced physician, and due to the limited availability of qualified personnel, e.g. in developing countries, a large body of research is attempting to develop automated, computational tools for detecting abnormalities in heart sounds. The large heterogeneity of achievable data quality and devices, the variety of possible heart pathologies, and a generally poor signal-to-noise ratio make this problem extremely challenging. We present an accurate classification strategy for diagnosing heart sounds based on 1) automatic heart phase segmentation, 2) state-of-the-art filters drawn from the field of speech synthesis (mel-frequency cepstral representation), and 3) an ad-hoc multi-branch, multi-instance artificial neural network based on convolutional layers and fully connected neuronal ensembles, which learns separately from each heart phase, hence leveraging their different physiological significance. We demonstrate that our architecture can be trained to reach very high performance, e.g. an AUC of 0.87 or a sensitivity of 0.97. Our machine-learning-based tool could be employed for heart sound classification, especially as a screening tool in a variety of situations, including telemedicine applications.
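To illustrate step 2 of the pipeline, the sketch below shows how a mel-frequency cepstral map might be computed for one segmented heart phase. It is a minimal example assuming the librosa library, a 2 kHz sampling rate, and 13 coefficients; none of these choices are stated in the paper.

```python
import numpy as np
import librosa

def phase_mfcc(segment, sr=2000, n_mfcc=13):
    """MFCC map for one segmented heart phase.

    sr and n_mfcc are illustrative assumptions; the paper does not
    specify the sampling rate or number of coefficients.
    """
    return librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=n_mfcc)

# Toy usage: one second of synthetic signal standing in for a phase.
segment = np.random.randn(2000).astype(np.float32)
print(phase_mfcc(segment).shape)  # (n_mfcc, n_frames)
```

A multi-branch network of the kind the abstract outlines could be sketched as follows: one small convolutional branch per heart phase (e.g. S1, systole, S2, diastole), with branch outputs concatenated into fully connected layers that produce a normal-vs-abnormal decision. This is a PyTorch sketch under assumed layer sizes and input shapes, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class MultiBranchPCGNet(nn.Module):
    """One convolutional branch per heart phase; branch features are
    concatenated and classified by fully connected layers. All layer
    sizes here are illustrative assumptions."""

    def __init__(self, n_phases=4, n_mfcc=13, n_frames=32):
        super().__init__()
        # One independent convolutional feature extractor per phase.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Flatten(),
            )
            for _ in range(n_phases)
        ])
        feat = 8 * (n_mfcc // 2) * (n_frames // 2)
        # Fully connected head over the concatenated branch outputs.
        self.head = nn.Sequential(
            nn.Linear(n_phases * feat, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # normal-vs-abnormal logit
        )

    def forward(self, phases):
        # phases: one (batch, 1, n_mfcc, n_frames) MFCC map per phase.
        z = torch.cat([b(x) for b, x in zip(self.branches, phases)], dim=1)
        return self.head(z)

# Toy usage with random MFCC maps for four phases.
net = MultiBranchPCGNet()
batch = [torch.randn(2, 1, 13, 32) for _ in range(4)]
print(net(batch).shape)  # torch.Size([2, 1])
```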
2021
Settore FIS/07 - Applied Physics (to Cultural Heritage, the Environment, Biology and Medicine)
English
International relevance
Scientific article in conference proceedings
Heart Auscultation
Humans
Machine Learning
Neural Networks, Computer
Signal-To-Noise Ratio
Heart Sounds
Book contribution
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/291405
Citations
  • PMC 0
  • Scopus 0
  • Web of Science 0