When Doctors and AI Interact: on Human Responsibility for Artificial Risks
Date
2022-02
Language
en
Publisher
Springer Science and Business Media B.V.
CC License
Attribution 4.0 International (CC BY 4.0)
https://link-springer-com.recursosbiblioteca.unab.cl/article/10.1007/s13347-022-00506-6#rightslink
Abstract
A discussion concerning whether to conceive of Artificial Intelligence (AI) systems as responsible moral entities, also known as “artificial moral agents” (AMAs), has been going on for some time. In this regard, we argue that the notion of “moral agency” should be attributed only to humans, on the basis of their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence, and argue against fully automated systems in medicine. With this perspective in mind, we focus on the use of AI-based diagnostic systems and shed light on the complex networks of persons, organizations, and artifacts that arise when AI systems are designed, developed, and used in medicine. We then discuss relational criteria of judgment that support the attribution of responsibility to humans when adverse events are caused or induced by errors in AI systems.
Notes
Indexing: Scopus.
Keywords
Artificial Intelligence, Due diligence, Medicine, Moral agency, Principle of confidence, Responsibility
Citation
Philosophy and Technology, Volume 35, Issue 1, March 2022, Article number 11
DOI
10.1007/s13347-022-00506-6