FELLOWSHIPS
MAINA Hernán Javier
Articles
Title:
Automatic multi-modal processing of language and vision to assist people with visual impairments
Author(s):
HERNÁN MAINA; LUCIANA BENOTTI
Journal:
LatinX in AI (LXAI) Research Workshop at the North American Chapter of the Association for Computational Linguistics (NAACL) Conference 2022, Virtual.
Publisher:
Association for Computational Linguistics
References:
Year: 2022
ISSN:
2331-8422
Abstract:
In recent years, the study of the intersection between the vision and language modalities, specifically in visual question answering (VQA) models, has gained significant appeal due to its great potential in assistive applications for people with visual disabilities. Despite this, to date, many of the existing VQA models are not applicable to this goal, for at least three reasons. First, they are designed to respond to a single question; that is, they are not able to give feedback on incomplete or incremental questions. Second, they consider only a single image, one that is neither blurred, poorly focused, nor poorly framed. These problems are directly related to the loss of visual capacity: people with visual disabilities may have trouble interacting with a visual user interface to ask questions and to take adequate photographs. Third, these users frequently need to read text captured in their images, and most current VQA systems fall short in this task.

This work presents a PhD proposal with four lines of research to be carried out until December 2025. It investigates techniques that increase the robustness of VQA models. In particular, we propose integrating dialogue history, analyzing more than one input image, and incorporating text recognition capabilities into the models. All of these contributions are motivated by the goal of assisting people with vision problems in their day-to-day tasks.