RESEARCHERS
RODRIGUEZ Juan Manuel
congresses and scientific meetings
Title:
Does the performance of Text-to-Image retrieval models generalize beyond captions-as-a-query?
Author(s):
JUAN MANUEL RODRIGUEZ; NIMA TAVASSOLI; ELIEZER LEVY; GIL LEDERMAN; DIMA SIVOV; MATTEO LISSANDRINI; DAVIDE MOTTIN
Location:
Glasgow
Meeting:
Conference; European Conference on Information Retrieval 2024; 2024
Abstract:
Text-to-image retrieval (T2I) refers to the task of recovering all images relevant to a keyword query. Popular datasets for text-image retrieval, such as Flickr30k, VG, or MS-COCO, utilize annotated image captions, e.g., “a man playing with a kid”, as a surrogate for queries. With such surrogate queries, current multi-modal machine learning models, such as CLIP or BLIP, perform remarkably well. The main reason is the descriptive nature of captions, which detail the content of an image. Yet, T2I queries go beyond the mere descriptions in image-caption pairs. Thus, these datasets are ill-suited to test methods on more abstract or conceptual queries, e.g., “family vacations”. In such queries, the image content is implied rather than explicitly described. In this paper, we replicate the T2I results on descriptive queries and generalize them to conceptual queries. To this end, we perform new experiments on a novel T2I benchmark for the task of conceptual query answering, called ConQA. ConQA comprises 30 descriptive and 50 conceptual queries on 43k images with more than 100 manually annotated images per query. Our results on established measures show that both large pretrained models (e.g., CLIP, BLIP, and BLIP2) and small models (e.g., SGRAF and NAAF) perform up to 4× better on descriptive than on conceptual queries. We also find that the models perform better on queries with more than 6 keywords, as is typical of MS-COCO captions.
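As a rough illustration of the task studied in the paper, the following is a minimal sketch of text-to-image retrieval with a pretrained CLIP model via the Hugging Face transformers library. The model checkpoint, image paths, and example query are placeholders; this is not the paper's evaluation setup, only an assumed baseline showing how a text query (descriptive or conceptual) can be scored against candidate images and ranked.

```python
# Minimal sketch: rank candidate images for a text query with CLIP.
# Image paths and the query below are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image_paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # candidate pool
images = [Image.open(p) for p in image_paths]
query = "family vacations"  # conceptual query example from the abstract

inputs = processor(text=[query], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_queries, num_images); higher score = more relevant.
scores = outputs.logits_per_text[0]
ranking = scores.argsort(descending=True)
for rank, idx in enumerate(ranking.tolist(), start=1):
    print(f"{rank}. {image_paths[idx]} (score={scores[idx].item():.2f})")
```

Under this kind of setup, a descriptive query such as “a man playing with a kid” tends to align directly with caption-like image content, whereas a conceptual query like “family vacations” only implies the content, which is the gap the ConQA benchmark is designed to measure.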