

Multimodal and multicontrast image fusion via deep generative models

Duggento, A; Toschi, N
2022-01-01

Abstract

Recently, it has become progressively more evident that classic diagnostic labels are unable to accurately and reliably describe the complexity and variability of several clinical phenotypes. This is particularly true for a broad range of neuropsychiatric illnesses such as depression and anxiety disorders, or behavioural phenotypes such as aggression and antisocial personality. Patient heterogeneity can be better described and conceptualized by grouping individuals into novel categories based on empirically derived sections of intersecting continua that span both across and beyond traditional categorical borders. In this context, neuroimaging data (i.e. the set of images resulting from functional/metabolic (e.g. functional magnetic resonance imaging, functional near-infrared spectroscopy, or positron emission tomography) and structural (e.g. computed tomography, T1-, T2-, PD- or diffusion-weighted magnetic resonance imaging) acquisitions) carry a wealth of spatiotemporally resolved information about each patient's brain. However, these data are usually heavily collapsed a priori through procedures which are not learned as part of model training, and are consequently not optimized for the downstream prediction task. This is because every individual participant usually comes with multiple whole-brain 3D imaging modalities, often accompanied by a deep genotypic and phenotypic characterization, posing formidable computational challenges.

In this paper we design and validate a deep learning architecture based on generative models, rooted in a modular approach and separable convolutional blocks (which result in a 20-fold decrease in parameter utilization), in order to a) fuse multiple 3D neuroimaging modalities at the voxel level, b) efficiently convert them into informative latent embeddings through heavy dimensionality reduction, and c) maintain good generalizability and minimal information loss. As a proof of concept, we test our architecture on the well-characterized Human Connectome Project database (n = 974 healthy subjects), demonstrating that our latent embeddings can be clustered into easily separable subject strata which, in turn, map onto different phenotypical information (including organic, neuropsychological, and personality variables) that was not included in the embedding creation process.

The ability to extract meaningful and separable phenotypic information from brain images alone can aid in creating multi-dimensional biomarkers able to chart spatio-temporal trajectories which may correspond to different pathophysiological mechanisms not identifiable through traditional data analysis approaches. In turn, this may help predict disease evolution as well as drug response, supporting mechanistic disease understanding and empowering clinical trials.
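The roughly 20-fold saving in parameters quoted above is of the order that depthwise-separable 3D convolutions give over standard 3D convolutions. The sketch below is a minimal, hypothetical illustration of such a separable convolutional block, assuming a PyTorch implementation; the class name, channel counts and kernel size are illustrative choices and are not taken from the authors' released code.

import torch
import torch.nn as nn


class SeparableConv3d(nn.Module):
    """Depthwise 3D convolution followed by a 1x1x1 pointwise convolution."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # Depthwise step: one spatial filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv3d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_channels, bias=False)
        # Pointwise step: 1x1x1 convolution that mixes information across channels.
        self.pointwise = nn.Conv3d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


def count_params(module):
    return sum(p.numel() for p in module.parameters())


if __name__ == "__main__":
    in_ch, out_ch, k = 64, 64, 3  # illustrative sizes, not the paper's configuration
    standard = nn.Conv3d(in_ch, out_ch, k, padding=k // 2, bias=False)
    separable = SeparableConv3d(in_ch, out_ch, k)
    # Standard 3D conv: 64 * 64 * 27      = 110,592 weights
    # Separable block:  64 * 27 + 64 * 64 =   5,824 weights (~19x fewer)
    print(count_params(standard), count_params(separable))

With 64 input channels, 64 output channels and a 3x3x3 kernel, the standard convolution uses 64·64·27 = 110,592 weights, whereas the depthwise-plus-pointwise pair uses 64·27 + 64·64 = 5,824, i.e. roughly a 19-fold reduction per block, consistent with the order of magnitude reported in the abstract.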
2022
Published
International relevance
Article
Anonymous expert referees
Sector FIS/07 - Applied Physics (to Cultural Heritage, Environment, Biology and Medicine)
English
Deep autoencoder
Phenotype stratification
Latent embeddings
Precision medicine
Separable Convolutions
Multimodal neuroimaging
Dimitri, G., Spasov, S., Duggento, A., Passamonti, L., Lio, P., Toschi, N. (2022). Multimodal and multicontrast image fusion via deep generative models. INFORMATION FUSION, 88, 146-160 [10.1016/j.inffus.2022.07.017].
Dimitri, G; Spasov, S; Duggento, A; Passamonti, L; Lio, P; Toschi, N
Journal article
Files in this item:

MULTIMODALMULTICONTRAST.pdf
Access: authorized users only
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 8.19 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/309196
Citations
  • PMC: not available
  • Scopus: 21
  • Web of Science (ISI): 21