Title:
Philosophy of Artificial Intelligence for Health
Author(s):
MASTROLEO, IGNACIO
Location:
Freiburg
Meeting:
Symposium; Symposium Intelligent Oncology: The Potential of AI to Cure Cancer; 2022
Organizing institution:
Mertelsmann Foundation
Abstract:
Solid analytical work in philosophy has clearly defined the real possibility of existential harm to humanity from general artificial intelligence (AI) and argued that existential risk prevention is a global priority. Such existential risk scenarios of a "super AI" exterminating humankind hold a rich and powerful attraction for our imagination. In contrast, here I claim that the main practical aim of the philosophy of AI for health should be the study of non-existential risks of AI health interventions, including AI scientific health claims. I will call this "the duty to study the non-existential risks of AI health interventions". To support this duty, I will argue that harm to individuals, groups, or populations from AI health interventions is not merely a probability that may materialize in the future but a reality already unfolding in the present, one with consequences for our understanding of traditional duties and one that can be averted or minimized with simple safeguards. The Surgisphere database scandal during the COVID-19 pandemic illustrates the first point. AI scientific claims published in The Lancet and NEJM, based on a fraudulent global database that allegedly used AI/ML to automate processes, led governments and the World Health Organization (WHO) to change their health policies with harmful results. If this argument is sound, three traditional duties of the health professions gain new meaning in the philosophy of AI for health. First, the duty to care supports the main idea of this presentation, that the philosophy of AI for health should prioritize the study of non-existential risks, just as in medicine we prioritize some patients for emergency care over research. Second, once we know of non-existential risks, the duty to do no harm justifies implementing appropriate safeguards in the work of health professionals (e.g., in medicine, nursing, pharmacy, and public health) and of the data scientists who are currently developing and using AI health interventions. Third, the duty of justice states that we should redress unfair inequalities related to AI health interventions, for example by improving AI education for health professionals and strengthening the capacity of national governments, usually in low- and middle-income countries at higher risk of non-existential harms, to evaluate AI scientific claims. Finally, I will share some examples of simple, appropriate safeguards (e.g., the SPIRIT-AI guidelines for clinical trial protocols, support from the WHO) and attempt to address general objections to their implementation, such as futility, stifling the benefits of health innovation, and trade secret confidentiality.