Title:
Ethical concerns about AI systems: an innovative approach
Author(s):
PEREZ, DIANA INÉS; LAWLER, DIEGO; PEDACE, KARINA; BALMACEDA, TOMAS
Place:
Buenos Aires
Meeting:
Congress; 17th International Congress of Logic, Methodology and Philosophy of Science and Technology; 2023
Organizing institution:
International Union of History and Philosophy of Science and Technology
Abstract:
The aim of this paper is to clarify the different dimensions in which AI systems can be ethically evaluated. First, we will introduce a conceptual clarification of the notion of algorithm involved in these systems. We propose two ways of characterizing algorithms, which can be called "algorithm in the narrow sense" and "algorithm in the broad sense". We argue that algorithms in the narrow sense are not subject to ethical assessment, but algorithms in the broad sense are. Second, we identify, for the latter case, different spheres of activity that can be ethically evaluated: (1) the human practices that concern their design, and (2) the practices that concern our interaction with AI systems once they are present in human societies.

Regarding (1), the distinction between the two senses of algorithm allows us to locate more precisely the ethical difficulties that these systems pose. In the narrow sense, an algorithm is a mathematical construct that is selected during the design of a system or technological artifact because of its past effectiveness in solving tasks similar to the one now intended to be solved. Examples of algorithms in the narrow sense are deep neural networks, Bayesian networks, Markov chains, the simple Perceptron model, etc. (Mittelstadt et al. 2016; Pasquinelli and Joler 2021). In contrast, an algorithm in the broad sense is a tripartite technological system comprising training data, a learning algorithm (the algorithm in the narrow sense), and a statistical model as its final output. This system is designed, assembled, and implemented for certain purposes, connected to the resolution of a previously formulated practical problem.
The production of an algorithm in the broad sense passes through four key phases: (i) the characterization of both the problem to be solved and the solution sought; (ii) the design, formatting, and editing of the data with which it is going to work; (iii) the selection of the algorithm in the narrow sense; and (iv) the training of the algorithm on the available data and the evaluation of the technological system until it is fine-tuned. In each of these phases the designers face problems that must be solved and decisions that must be made, and each phase can be a source of the ethical concerns that the finished AI system exhibits (Balmaceda, Pedace, Lawler, Perez, Zeller 2022).

Regarding (2), we hold that, unlike what happens when we interact with other kinds of artifacts designed by human beings, many AI systems are treated as "intentional systems" rather than being understood from the design stance (Dennett 1987). As long as we see AI systems as intelligent machines, we tend to understand what they do as if they were agents. Therefore, in our interactions with them we treat them as intentional systems and describe their behavior using psychological language (we say that the AI system decides, suggests, answers our questions, makes assertions, shows us things, etc.). We will argue that as long as we adopt this double stance while interacting with and understanding these systems, additional ethical challenges, different from those generated by other artifacts, emerge.
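The tripartite structure of an algorithm in the broad sense can be illustrated with a minimal, self-contained sketch using the simple Perceptron that the abstract mentions as an example of an algorithm in the narrow sense. The toy data set and all names below are hypothetical, chosen purely for illustration: the point is only to show the three components (training data, narrow-sense learning algorithm, resulting statistical model) as distinct parts of one designed system.

```python
# Hypothetical sketch of the tripartite "algorithm in the broad sense":
# (1) training data, (2) an algorithm in the narrow sense (the perceptron
# update rule), and (3) a statistical model (learned weights) as output.

# (1) Training data: toy 2-D points labelled by the rule x + y > 1
#     (an arbitrary assumption made only for this illustration).
data = [((0.0, 0.0), -1), ((1.0, 1.0), 1),
        ((0.2, 0.3), -1), ((0.9, 0.8), 1)]

def train_perceptron(data, epochs=20, lr=0.1):
    """(2) Algorithm in the narrow sense: the simple perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if pred != label:            # misclassified: nudge the weights
                w[0] += lr * label * x1
                w[1] += lr * label * x2
                b += lr * label
    return w, b

# (3) The statistical model: weights produced by training on the data.
model = train_perceptron(data)

def predict(model, point):
    """Apply the learned model to a new point."""
    w, b = model
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else -1
```

On this view, ethical evaluation attaches not to `train_perceptron` in isolation (the mathematical construct) but to the whole assembled system: how the data were selected and edited, why this learning rule was chosen, and how the resulting model is evaluated and deployed.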