
Ferrante, M., Boccato, T., Spasov, S., Duggento, A., Toschi, N. (2023). VAESim: A probabilistic approach for self-supervised prototype discovery. IMAGE AND VISION COMPUTING, 137, 104746 [10.1016/j.imavis.2023.104746].

VAESim: A probabilistic approach for self-supervised prototype discovery

Ferrante, M; Boccato, T; Spasov, S; Duggento, A; Toschi, N
2023-01-01

Abstract

In medical image datasets, discrete labels are often used to describe a continuous spectrum of conditions, making unsupervised image stratification a challenging task. In this work, we propose VAESim, an architecture for image stratification based on a conditional variational autoencoder. VAESim learns a set of prototypical vectors during training, each associated with a cluster in a continuous latent space. We perform a soft assignment of each data sample to the clusters and reconstruct the sample based on a similarity measure between the sample embedding and the prototypical vectors. To update the prototypical embeddings, we use an exponential moving average of the most similar representations between the current prototypes and the samples in the batch. We test our approach on the MNIST handwritten digit dataset and the PneumoniaMNIST medical benchmark dataset, where we show that our method outperforms baselines in terms of kNN accuracy (up to a +15% improvement) and performs on par with classification models trained in a fully supervised way. Our model also outperforms current end-to-end models for unsupervised stratification.
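
To make the prototype mechanism described above concrete, the sketch below illustrates a soft assignment of encoder embeddings to prototypes via a similarity measure and an exponential-moving-average (EMA) refresh of the prototypes from the most similar embeddings in the batch. This is a minimal illustrative sketch, not the published implementation: the class name, the choice of cosine similarity, and the ema_decay value are assumptions.

```python
# Sketch of the prototype mechanism: soft assignment of sample embeddings
# to learned prototypes and an EMA update of each prototype from the batch.
# Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F


class VAESimPrototypes(torch.nn.Module):
    def __init__(self, num_prototypes: int, latent_dim: int, ema_decay: float = 0.99):
        super().__init__()
        self.ema_decay = ema_decay
        # Prototypes are refreshed by EMA, not by gradient descent.
        self.register_buffer("prototypes", torch.randn(num_prototypes, latent_dim))

    def soft_assign(self, z: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between sample embeddings and prototypes,
        # turned into a soft (probabilistic) cluster assignment.
        sim = F.cosine_similarity(z.unsqueeze(1), self.prototypes.unsqueeze(0), dim=-1)
        return F.softmax(sim, dim=-1)  # shape: (batch, num_prototypes)

    def condition(self, z: torch.Tensor) -> torch.Tensor:
        # Similarity-weighted mixture of prototypes, usable as the
        # conditioning signal for the decoder of a conditional VAE.
        q = self.soft_assign(z)
        return q @ self.prototypes

    @torch.no_grad()
    def ema_update(self, z: torch.Tensor) -> None:
        # For each prototype, average the embeddings most similar to it
        # (hard argmax of the soft assignment) and blend via EMA.
        q = self.soft_assign(z)
        closest = q.argmax(dim=-1)
        for k in range(self.prototypes.shape[0]):
            members = z[closest == k]
            if members.numel() > 0:
                self.prototypes[k] = (
                    self.ema_decay * self.prototypes[k]
                    + (1.0 - self.ema_decay) * members.mean(dim=0)
                )


if __name__ == "__main__":
    protos = VAESimPrototypes(num_prototypes=10, latent_dim=32)
    z = torch.randn(64, 32)      # embeddings from the VAE encoder
    cond = protos.condition(z)   # conditioning vector for the decoder
    protos.ema_update(z)         # prototype refresh after the batch
    print(cond.shape)            # torch.Size([64, 32])
```

Keeping the prototypes as a buffer updated by EMA (rather than a learnable parameter) mirrors the abstract's description of updating prototypical embeddings from the most similar batch representations.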
2023
Published
International relevance
Article
Anonymous expert reviewers
Disciplinary sector FIS/07
English
Deep clustering
Medical imaging
Variational autoencoders
Prototype discovery
Ferrante, M; Boccato, T; Spasov, S; Duggento, A; Toschi, N
Journal article
Files in this item:

File: VAESIM.pdf
Access: Open access
Type: Publisher's version (PDF)
License: Creative Commons
Size: 4.93 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/340863
Citations
  • PubMed Central: not available
  • Scopus: 0
  • Web of Science: 0