LAWLER Diego
conferences and scientific meetings
Title:
Ethical concerns about AI systems: an innovative approach
Author(s):
PÉREZ, DIANA; LAWLER, DIEGO; BALMACEDA, TOMÁS; PEDACE, KARINA
Place:
Buenos Aires
Meeting:
Congress; 17th CLMPST: Science and Values in an Uncertain World; 2023
Organizing institution:
International Union of History and Philosophy of Science and Technology (DLMPST/IUHPST)
Abstract:
The aim of this paper is to clarify the different dimensions in which AI systems can be ethically evaluated. First, we introduce a conceptual clarification of the notion of algorithm involved in these systems. We propose two ways of characterizing algorithms, which can be called “algorithm in the narrow sense” and “algorithm in the broad sense”. We argue that algorithms in the narrow sense are not subject to ethical assessment, but algorithms in the broad sense actually are.

Second, we identify, for the latter case, different spheres of activity that can be ethically evaluated: (1) the human practices that concern the design of these systems, and (2) the practices that comprise our interaction with AI systems once they are present in human societies.

Regarding (1), the distinction drawn about algorithms allows us to locate more adequately the ethical difficulties that these systems pose. In the narrow sense, an algorithm is a mathematical construct that is selected during the design of a system or technological artifact because of its past effectiveness in solving tasks similar to the one the system is now intended to solve. Examples of algorithms in the narrow sense are deep neural networks, Bayesian networks, Markov chains, the simple perceptron model, etc. (Mittelstadt et al. 2016; Pasquinelli and Joler 2021).

In contrast, an algorithm in the broad sense is a tripartite technological system comprising training data, a learning algorithm (the algorithm in the narrow sense), and a statistical model as its final output. This system is designed, assembled, and implemented for certain purposes, connected to the resolution of a previously formulated practical problem. The production of an algorithm in the broad sense passes through four key phases: (i) the characterization of both the problem to be solved and the solution sought; (ii) the design, formatting, and editing of the data with which the system will work; (iii) the selection of the algorithm in the narrow sense; and (iv) the training of the algorithm on the available data and the evaluation of the technological system until it is fine-tuned. In each of these phases the designers face problems and difficulties that have to be solved and decisions that have to be made, and each phase can be a source of the ethical concerns that the finished AI system exhibits (Balmaceda, Pedace, Lawler, Pérez, Zeller, 2022).

Regarding (2), we hold that, unlike what happens when we interact with other kinds of artifacts designed by human beings, many AI systems are treated as “intentional systems” instead of being understood from the design stance (Dennett 1987). As long as we see AI systems as intelligent machines, we tend to understand what they do as if they were agents. Therefore, in our interactions with them we treat them as intentional systems and describe their behavior using psychological language (we say that the AI system decides, suggests, answers our questions, makes assertions, shows us things, etc.). We will argue that as long as we adopt this double stance in interacting with and understanding these systems, additional ethical challenges, different from those generated by other artifacts, emerge.
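To fix ideas, the tripartite picture can be given a minimal Python sketch (purely illustrative: the loan-screening scenario, the toy data, and all names are hypothetical), with the simple perceptron learning rule standing in for the algorithm in the narrow sense and the learned weights for the statistical model:

    # Illustrative toy only: a "broad sense" algorithm assembled in four phases.
    # The narrow-sense algorithm is the perceptron learning rule; the data and
    # the loan-screening framing are hypothetical.

    # Phase (i): characterize the problem and the solution sought, e.g. flag
    # applications likely to default, framed as binary classification.

    # Phase (ii): design, format, and edit the data the system will work with.
    # (Choices about what gets recorded and how it is labelled enter here.)
    training_data = [
        ([1.0, 0.0], 1),
        ([0.9, 0.2], 1),
        ([0.1, 0.9], 0),
        ([0.0, 1.0], 0),
    ]

    # Phase (iii): select the algorithm in the narrow sense, here the simple
    # perceptron learning rule, a bare mathematical construct.
    def perceptron_fit(data, epochs=20, lr=0.1):
        weights = [0.0] * len(data[0][0])
        bias = 0.0
        for _ in range(epochs):
            for features, label in data:
                activation = sum(w * x for w, x in zip(weights, features)) + bias
                error = label - (1 if activation > 0 else 0)
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
        return weights, bias  # the statistical model: the system's final output

    # Phase (iv): train on the available data and evaluate until fine-tuned.
    model = perceptron_fit(training_data)

    def predict(model, features):
        weights, bias = model
        return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

    # The broad-sense algorithm is the whole assembly: data, learning rule, and
    # resulting model, deployed for a purpose.
    print(predict(model, [0.8, 0.1]))  # prints 1 for this toy input

On this sketch the learning rule by itself fixes no purpose; purposes, and with them the ethical exposure, enter with the problem framing, the data choices, and the deployment of the assembled system.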
References
Balmaceda, T., Pedace, K., Lawler, D., Pérez, D., & Zeller, M. (2022). Pensar la tecnología con perspectiva de género. Buenos Aires: CETys.
Dennett, D. C. (1987). The intentional stance. MIT Press.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society. https://doi.org/10.1177/2053951716679679
Pasquinelli, M., & Joler, V. (2021). El Nooscopio de manifiesto. laFuga, 25.