Knie, B. (2023). Ensuring the safety of artificial intelligence: A comprehensive analysis and framework [Doctoral thesis].
Ensuring the safety of artificial intelligence: a comprehensive analysis and framework
Knie, Bernhard Hermann Antoine
2023-01-01
Abstract
This doctoral thesis aims to provide a thorough examination of the critical issue of ensuring the safety of Artificial Intelligence (AI). As AI systems become increasingly integrated into various aspects of society, addressing safety concerns has become paramount. The research explores the potential risks associated with AI, including ethical considerations, unintended consequences, and the challenge of designing systems that align with human values. The study also investigates existing frameworks and methodologies for AI safety and proposes a comprehensive framework encompassing both technical and ethical dimensions, illustrated by the automotive example of pedestrian detection systems.

| File | Size | Format |
|---|---|---|
| Ensuring the Safety of Artificial Intelligence_A Comprehensive Analysis and Framework_KNIE_BERNHARD.pdf | 1.54 MB | Adobe PDF |

License: Authors' copyright. Full text not available; a copy may be requested.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.


