
Dimitri, G.M., Spasov, S., Duggento, A., Passamonti, L., Lio, P., Toschi, N. (2021). Multimodal image fusion via deep generative models. INFORMATION FUSION [10.1101/2021.03.08.434427].

Multimodal image fusion via deep generative models

Duggento, A.; Toschi, N.
2021-01-01

Abstract

Recently, it has become progressively more evident that classic diagnostic labels are unable to accurately and reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses, such as depression and anxiety disorders, or behavioural phenotypes such as aggression and antisocial personality. Patient heterogeneity can be better described and conceptualized by grouping individuals into novel categories based on empirically derived sections of intersecting continua that span both across and beyond traditional categorical borders. In this context, neuroimaging data carry a wealth of spatiotemporally resolved information about each patient’s brain. However, these data are usually heavily collapsed a priori through procedures which are not learned as part of model training, and are consequently not optimized for the downstream prediction task. This is because every individual participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by a deep genotypic and phenotypic characterization, hence posing formidable computational challenges. In this paper we design and validate a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks (which result in a 20-fold decrease in parameter utilization), in order to a) fuse multiple 3D neuroimaging modalities on a voxel-wise level, b) efficiently convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain excellent generalizability and minimal information loss. As a proof of concept, we test our architecture on the well-characterized Human Connectome Project database (n = 974 healthy subjects), demonstrating that our latent embeddings can be clustered into easily separable subject strata which, in turn, map to extremely different phenotypical information (including organic, neuropsychological, and personality variables) that was not included in the embedding creation process. The ability to extract meaningful and separable phenotypic information from brain images alone can aid in creating multi-dimensional biomarkers able to chart spatio-temporal trajectories which may correspond to different pathophysiological mechanisms unidentifiable by traditional data analysis approaches. In turn, this may be of aid in predicting disease evolution as well as drug response, hence supporting mechanistic disease understanding and also empowering clinical trials.
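As a rough illustration of the separable convolutional blocks and the parameter savings mentioned in the abstract, the sketch below (PyTorch; not the authors' released code — the module name, channel widths, normalization/activation choices and the toy two-modality input are assumptions) factorizes a 3D convolution into a depthwise and a pointwise step and compares its parameter count against a standard 3D convolution.

# Minimal sketch of a depthwise-separable 3D convolutional block.
# Assumptions: module/variable names, channel widths, BatchNorm + ReLU, and the
# toy two-modality input are illustrative only and not taken from the paper.

import torch
import torch.nn as nn


class SeparableConv3dBlock(nn.Module):
    """Depthwise 3D convolution followed by a 1x1x1 pointwise convolution."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        # Depthwise step: one spatial filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv3d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False,
        )
        # Pointwise step: 1x1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv3d(in_channels, out_channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm3d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.norm(self.pointwise(self.depthwise(x))))


def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())


if __name__ == "__main__":
    # Two co-registered 3D modalities (e.g. two MRI contrasts) stacked as input
    # channels and fused voxel-wise by the first block (an assumed usage pattern).
    x = torch.randn(1, 2, 48, 48, 48)
    block = SeparableConv3dBlock(in_channels=2, out_channels=32)
    print("fused feature map:", block(x).shape)

    # Parameter comparison at a more typical layer width (32 -> 64 channels):
    # the separable block uses roughly 3k parameters versus ~55k for a standard
    # 3x3x3 convolution, i.e. close to a 20-fold reduction at this width.
    separable = SeparableConv3dBlock(32, 64)
    standard = nn.Conv3d(32, 64, kernel_size=3, padding=1, bias=False)
    print("separable params:", count_params(separable))
    print("standard params: ", count_params(standard))

The saving grows with the number of channels, which is consistent with the order-of-magnitude reduction in parameter utilization reported for the full architecture.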
2021
Online ahead of print
International relevance
Article
Anonymous peer reviewers
Sector PHYS-06/A - Physics for life sciences, the environment, and cultural heritage
English
Simeon Spasov's research is supported by the EPSRC. Luca Passamonti is funded by a Medical Research Council (MRC) grant (MR/P01271X/1) at the University of Cambridge, UK. The GPUs on which this work was performed were generously provided by NVIDIA. Part of this work has been supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101017727 (EXPERIENCE project).
Dimitri, GM; Spasov, S; Duggento, A; Passamonti, L; Lio, P; Toschi, N
Journal article
Files in this item:
Multimodal and multicontrast image fusion via deep generative models.pdf
Access: authorized users only
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 2.41 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/2108/404283