Title:
Parsing a mental program on visual search in natural scenes: Fixation-related brain signatures of unitary operations and routines
Author(s):
KAMIENKOWSKI JE; ISON MJ
Meeting:
Conference; Gordon Research Conference on Eye Movements; 2019
Abstract:
Visual search involves a sequence, or routine, of unitary operations (i.e., fixations) embedded in a larger, global mental program. The process can indeed be seen as a program based on a while loop (while the target is not found), a conditional construct (whether or not the target is matched, based on specific recognition algorithms), and a decision-making step that determines the position of the next searched location based on the evidence accumulated throughout the trial. Recent developments in our ability to co-register brain scalp potentials (EEG) during free eye movements have made it possible to investigate brain responses related to fixations (fixation-related potentials, fERPs), including the identification of sensory and cognitive local EEG components linked to individual fixations. However, the way in which the mental program guiding the search unfolds has not yet been investigated. Here, we introduce a data-driven framework to link oscillations and fERPs with the underlying complex mental programs executed in natural viewing. This framework is supported by our previous EEG and eye-tracking co-registration experiments, in which participants searched for a target face in natural crowded scenes. In those experiments, we showed how unitary steps of the program are encoded by specific local target-detection signatures, and how the position of each unitary operation within the global search program can be pinpointed by changes in the EEG signal amplitude as well as in the signal power in different frequency bands. By simultaneously studying brain signatures of unitary operations and those occurring during the sequence of fixations, our study sheds light on how local and global properties are combined in implementing visual routines in natural tasks. Finally, new insights drawn from a novel computational model are related to this framework. The model combines top-down integration of evidence across fixations, based on the Bayesian ideal observer model, with bottom-up scene processing based on convolutional neural network models.
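
As an illustration of the program structure described in the abstract (while loop, conditional target check, decision step), the minimal Python sketch below makes that structure explicit. The names match_score and choose_next_fixation, the threshold rule, and the fixation cap are hypothetical placeholders for illustration, not the procedure used in the experiments.

def visual_search(scene, target, match_score, choose_next_fixation,
                  threshold=0.9, max_fixations=50):
    """Schematic search routine: fixate, test for the target, decide where to look next."""
    evidence = []                                          # accumulated over the trial
    fixation = choose_next_fixation(scene, evidence)       # initial fixation
    for _ in range(max_fixations):                         # bounded form of "while the target is not found"
        score = match_score(scene, fixation, target)       # local recognition step (conditional construct)
        evidence.append((fixation, score))
        if score >= threshold:                             # target matched: the routine terminates
            return fixation, evidence
        fixation = choose_next_fixation(scene, evidence)   # decision step based on accumulated evidence
    return None, evidence                                  # search ended without finding the target

Each iteration of the loop corresponds to one unitary operation (a fixation), and the accumulated evidence list is what allows the decision step to depend on the whole history of the trial rather than on the current fixation alone.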
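The fixation-related potentials mentioned above are typically obtained by cutting epochs of the continuous EEG around each fixation onset and averaging them. The following sketch assumes a NumPy array of continuous EEG (channels x samples) and a list of fixation onset times; the epoch window and baseline interval are illustrative choices, not the parameters of the reported analyses.

import numpy as np

def fixation_related_potentials(eeg, fixation_onsets, srate, tmin=-0.2, tmax=0.5):
    """Cut fixation-locked epochs from continuous EEG and average them into an fERP.

    eeg             : ndarray of shape (n_channels, n_samples)
    fixation_onsets : fixation onset times, in seconds
    srate           : sampling rate, in Hz
    """
    pre, post = int(-tmin * srate), int(tmax * srate)
    epochs = []
    for t in fixation_onsets:
        s = int(round(t * srate))
        if s - pre < 0 or s + post > eeg.shape[1]:
            continue                                        # skip fixations too close to the recording edges
        epoch = eeg[:, s - pre:s + post]
        baseline = epoch[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)                     # baseline-correct on the pre-fixation interval
    epochs = np.stack(epochs)                               # (n_fixations, n_channels, n_times)
    return epochs.mean(axis=0), epochs                      # fERP (average) and single-fixation epochs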
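Finally, a minimal sketch of how top-down Bayesian evidence accumulation across fixations can be combined with a bottom-up, CNN-derived saliency map, in the spirit of the ideal-observer model mentioned in the abstract. The Gaussian visibility map, the update rule, and the random stand-ins for the saliency and evidence maps are assumptions made purely for illustration, not the authors' model.

import numpy as np

def visibility(shape, fixation, sigma=3.0):
    """Detectability of the target at each location given the current fixation (peaks at the fovea)."""
    ys, xs = np.indices(shape)
    d2 = (ys - fixation[0]) ** 2 + (xs - fixation[1]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def bayesian_search_step(log_posterior, observation, fixation, sigma=3.0):
    """One update of the posterior map over target locations after a fixation."""
    w = visibility(log_posterior.shape, fixation, sigma)    # evidence reliability falls off with eccentricity
    log_posterior = log_posterior + w * observation         # weight local evidence by visibility
    log_posterior -= np.log(np.exp(log_posterior).sum())    # renormalize to a probability map
    return log_posterior

# Hypothetical usage: a CNN saliency map serves as the prior, and the next fixation
# is taken at the current maximum a posteriori location.
saliency = np.random.rand(32, 32)                           # stand-in for a CNN-derived saliency map
log_post = np.log(saliency / saliency.sum())
fixation = np.unravel_index(np.argmax(log_post), log_post.shape)
obs = np.random.randn(32, 32) * 0.1                         # stand-in for noisy local match evidence
log_post = bayesian_search_step(log_post, obs, fixation)
next_fixation = np.unravel_index(np.argmax(log_post), log_post.shape)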