Comes, M. C., Filippi, J., Mencattini, A., Casti, P., Cerrato, G., Sauvat, A., et al. (2021). Multi-scale generative adversarial network for improved evaluation of cell–cell interactions observed in organ-on-chip experiments. Neural Computing & Applications, 33(8), 3671–3689. https://doi.org/10.1007/s00521-020-05226-6
Multi-scale generative adversarial network for improved evaluation of cell–cell interactions observed in organ-on-chip experiments
Mencattini, A.; Casti, P.; Di Natale, C.; Martinelli, E.
2021-01-01
Abstract
Organs-on-chip (OOCs) represent a sophisticated approach for exploring biological mechanisms and developing therapeutic agents. In conjunction with high-quality time-lapse microscopy (TLM), OOCs allow the visualization of reconstituted complex biological processes, such as multi-cell-type migration and cell–cell interactions. In this context, increasing the frame rate is desirable to accurately reconstruct cell-interaction dynamics. However, a trade-off between high resolution and the information content carried is required to reduce the overall data volume. Moreover, high frame rates increase photobleaching and phototoxicity. As a possible solution to these problems, we report a new hybrid-imaging paradigm based on the integration of OOC/TLM imaging with a multi-scale generative adversarial network (GAN) that predicts interleaved video frames, with the aim of providing high-throughput videos. We tested the predictive capability of the GAN on synthetic videos, as well as on real OOC experiments dealing with tumor–immune cell interactions. The proposed approach offers the possibility of acquiring a reduced number of high-quality TLM images without any major loss of information on the phenomena under investigation.
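To make the idea of interleaved-frame prediction concrete, below is a minimal, illustrative sketch of a coarse-to-fine ("multi-scale") generator that estimates an in-between frame from two acquired TLM frames. The layer counts, channel widths, and the use of PyTorch are assumptions made for illustration only; they are not the architecture or training setup reported in the paper.

# Illustrative sketch: multi-scale frame interpolation for time-lapse microscopy.
# All architectural details here are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleGenerator(nn.Module):
    """Predicts the intermediate frame at one spatial scale.

    Input: the two neighbouring frames (2 channels) plus, at finer scales,
    the upsampled prediction from the coarser scale (1 extra channel).
    """

    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class MultiScaleGenerator(nn.Module):
    """Coarse-to-fine interpolation: predict at 1/4, 1/2, then full resolution."""

    def __init__(self, scales=(4, 2, 1)):
        super().__init__()
        self.scales = scales
        # The coarsest scale sees only the two frames; finer scales also
        # receive the upsampled estimate produced at the previous scale.
        self.generators = nn.ModuleList(
            [ScaleGenerator(2 if i == 0 else 3) for i in range(len(scales))]
        )

    def forward(self, frame_prev, frame_next):
        pair = torch.cat([frame_prev, frame_next], dim=1)  # (B, 2, H, W)
        prediction = None
        for scale, gen in zip(self.scales, self.generators):
            size = (pair.shape[2] // scale, pair.shape[3] // scale)
            inp = F.interpolate(pair, size=size, mode="bilinear", align_corners=False)
            if prediction is not None:
                up = F.interpolate(prediction, size=size, mode="bilinear", align_corners=False)
                inp = torch.cat([inp, up], dim=1)
            prediction = gen(inp)
        return prediction  # interleaved frame at full resolution


if __name__ == "__main__":
    # Toy usage: two grayscale 128x128 frames -> one predicted in-between frame.
    g = MultiScaleGenerator()
    f0 = torch.rand(1, 1, 128, 128)
    f1 = torch.rand(1, 1, 128, 128)
    f_mid = g(f0, f1)
    print(f_mid.shape)  # torch.Size([1, 1, 128, 128])

In the hybrid paradigm described in the abstract, such a generator (trained adversarially against a discriminator, which is omitted here) would be interleaved with the acquired frames, so that a lower acquisition rate still yields a high-frame-rate video for downstream analysis of cell–cell interaction dynamics.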