Efficient visual sensor fusion for autonomous agents

Bianchi L.; Carnevale D.; Masocco R.; Mattogno S.; Oliva F.; Romanelli F.; Tenaglia A. (Members of the Collaboration Group)
2023-01-01

Abstract

This paper addresses the problem of sensor fusion in the context of visual localization, namely the combined use of multiple vision sensors and visual odometry sources to estimate the position and orientation of an autonomous agent moving in an unknown environment. Focusing on a redundant sensor configuration consisting of many data sources that may also report their functioning state, the essential characteristics of visual sensors are recalled, and an effective fusion system is developed. Its twofold goal is to improve the overall estimate by combining the strengths of potentially different sensors and adding fault resilience, while allowing for a simple implementation that relies on little information. A test setup that represents a realistic configuration of an autonomous robot is described, and the experimental results are presented and discussed.
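The abstract describes combining several visual odometry sources, each possibly reporting its own functioning state, into a single pose estimate with fault resilience. The paper's actual fusion law is not reproduced in this record; the following is only a minimal illustrative sketch, assuming a covariance-weighted average of position estimates with faulty sources masked out. The names OdometrySample and fuse_positions are hypothetical and not taken from the paper.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class OdometrySample:
    """One report from a visual odometry source (hypothetical structure)."""
    position: np.ndarray    # 3D position estimate [x, y, z]
    covariance: np.ndarray  # 3x3 position covariance
    healthy: bool           # self-reported functioning state

def fuse_positions(samples):
    """Covariance-weighted fusion of redundant position estimates.

    Sources that report a fault are discarded; the remaining estimates are
    combined with information-filter weights (inverse covariances). This is
    an illustrative sketch, not the fusion scheme proposed in the paper.
    """
    valid = [s for s in samples if s.healthy]
    if not valid:
        raise ValueError("no healthy odometry source available")
    info = sum(np.linalg.inv(s.covariance) for s in valid)                 # total information
    weighted = sum(np.linalg.inv(s.covariance) @ s.position for s in valid)
    fused_cov = np.linalg.inv(info)
    return fused_cov @ weighted, fused_cov

# Example: two healthy sources and one that reports a fault (masked out).
samples = [
    OdometrySample(np.array([1.00, 2.00, 0.50]), 0.01 * np.eye(3), True),
    OdometrySample(np.array([1.05, 1.95, 0.52]), 0.04 * np.eye(3), True),
    OdometrySample(np.array([9.00, 9.00, 9.00]), 0.01 * np.eye(3), False),
]
fused_position, fused_covariance = fuse_positions(samples)
```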
International Conference on Control, Automation and Diagnosis (ICCAD 2023)
Rome (Italy)
2023
International relevance
Sector ING-IND/09
Sector IIND-06/B - Energy and environmental systems
English
Conference paper
Bianchi, L., Carnevale, D., Masocco, R., Mattogno, S., Oliva, F., Romanelli, F., et al. (2023). Efficient visual sensor fusion for autonomous agents. In 2023 International Conference on Control, Automation and Diagnosis (ICCAD). New York : IEEE [10.1109/ICCAD57653.2023.10152399].
Bianchi, L; Carnevale, D; Masocco, R; Mattogno, S; Oliva, F; Romanelli, F; Tenaglia, A
Files in this item:
IEEE 3.pdf (restricted to authorized users)
Type: Publisher's version (PDF)
License: Publisher's copyright
Size: 893.9 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2108/368344
Citations
  • PMC: ND
  • Scopus: 3
  • Web of Science (ISI): ND