
Multi-Sensor Fusion for Autonomous Resilient Perception: Exploiting Classical and Deep Learning Techniques

ROMANELLI, FABRIZIO
2024-01-01

Abstract

This Ph.D. thesis, entitled Multi-Sensor Fusion for Autonomous Resilient Perception: Exploiting Classical and Deep Learning Techniques, addresses the critical challenges associated with perception systems in autonomous applications. Perception plays a vital role in enabling autonomous systems to understand and interpret their environment, make accurate decisions, and operate safely and reliably. However, perception is inherently complex due to various sources of uncertainty, including sensor noise, occlusions, varying lighting conditions, and dynamic environments. To overcome these challenges, this research focuses on the development of robust and resilient perception systems through the fusion of data from multiple sensors. Fusing data from diverse sensors provides complementary and redundant information, enhancing overall perception performance and increasing resilience to sensor failures and limitations. The thesis investigates both classical and deep learning techniques for sensor fusion, leveraging their respective strengths to improve perception accuracy and reliability.

The classical techniques explored in this research include probabilistic methods, such as Bayesian filtering and Kalman filtering, which integrate sensor measurements to estimate the state of the environment. These techniques are enhanced with advanced methodologies to overcome problems related to sensor degradation and sensor unavailability (e.g., due to breakdowns) and to handle complex real-life scenarios with multiple moving objects and occlusions. Additionally, optimization algorithms are employed to further improve the performance of the sensor fusion process.

Deep learning techniques, specifically convolutional neural networks (CNNs) and recurrent neural networks (RNNs), are investigated for their capability to learn complex representations and patterns from sensor data. Deep learning models are trained on large-scale datasets to recognize and classify objects, detect anomalies, and estimate the state of the environment. Fusing deep learning outputs with classical techniques allows for a more comprehensive and accurate understanding of the environment.

Furthermore, the thesis addresses resilience in perception systems by incorporating fault detection and recovery mechanisms. Robustness against sensor failures, sensor drift, and adversarial attacks is achieved through redundancy and outlier rejection techniques. The proposed methods enable the perception system to adapt to changing conditions and maintain reliable performance even in the presence of sensor abnormalities (e.g., malfunctions).

More specifically, the thesis focuses on a particular perception problem, Simultaneous Localization and Mapping (SLAM), applied to a variety of contexts such as mobile robots and generic agents moving in unknown environments while acquiring measurements from different sets of sensors (odometry, Ultra-Wideband (UWB), Ultra High Frequency RFID (UHF-RFID), and visual systems). The applications presented in this thesis cover the following cases:
• Odometry-UHF RFID system
• Visual-UWB system
• Visual-UWB system with a deep learning approach
• Multiple visual systems

The effectiveness of the proposed multi-sensor fusion approaches is evaluated through extensive experiments and simulations using real-world datasets and synthetic scenarios. The evaluation encompasses various autonomous applications, including autonomous driving, robotics, and surveillance systems. The results demonstrate significant improvements in perception accuracy, robustness, and resilience compared to single-sensor and naive fusion approaches. In conclusion, this thesis contributes to the field of autonomous perception by presenting novel multi-sensor fusion techniques that exploit both classical and deep learning approaches, advancing the state of the art in accuracy, resilience, and adaptability and paving the way for more reliable and trustworthy autonomous systems in diverse real-world applications.
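As an illustration of the classical fusion approach named in the abstract, the following minimal Python sketch (not code from the thesis itself) shows an extended Kalman filter that fuses odometry-based motion prediction with UWB range measurements to a known anchor, rejects outliers with a chi-square gate, and skips the update when the radio is unavailable. The function names, noise values, and anchor position are assumptions made for this example.

import numpy as np

def predict(x, P, u, dt, Q):
    # Propagate state [px, py, theta] with odometry input u = (v, w).
    v, w = u
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + w * dt])
    # Jacobian of the motion model with respect to the state.
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def update_range(x, P, z, anchor, R, gate=9.0):
    # Fuse one UWB range measurement z to a known anchor position.
    d = x[:2] - anchor
    r = np.linalg.norm(d)
    H = np.array([[d[0] / r, d[1] / r, 0.0]])  # Jacobian of h(x) = ||p - a||
    y = z - r                                  # innovation
    S = H @ P @ H.T + R                        # innovation covariance
    if y * y / S[0, 0] > gate:                 # chi-square gate: reject outlier
        return x, P
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x + (K @ np.array([y])).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Usage: predict at every step; update only when the UWB radio reports.
x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.diag([0.01, 0.01, 0.005]), np.array([[0.05]])
anchor = np.array([5.0, 3.0])                  # assumed known UWB anchor
for z in [5.86, None, 5.74]:                   # None models sensor dropout
    x, P = predict(x, P, u=(1.0, 0.1), dt=0.1, Q=Q)
    if z is not None:                          # resilience: skip missing sensor
        x, P = update_range(x, P, z, anchor, R)

The same predict-update-gate structure extends to the other sensor pairings listed above; only the measurement model and its Jacobian change.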
Year: 2024
Academic year: 2023/2024
Doctoral program: Computer science, control and geoinformation
Cycle: 36
Scientific sector: Settore IINF-04/A - Automatica
Language: English
Type: Doctoral thesis
Citation: Romanelli, F. (2024). Multi-Sensor Fusion for Autonomous Resilient Perception: Exploiting Classical and Deep Learning Techniques.
Files in this record:
File: thesis.pdf
Size: 8.91 MB
Format: Adobe PDF
Availability: not available (copy available on request)
License: copyright of the authors

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/431764