Hierarchical Question-Image Co-Attention for Visual Question Answering

A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention.
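To make the co-attention idea concrete, below is a minimal, illustrative NumPy sketch of a parallel co-attention step in the spirit of the paper: an affinity matrix couples question-word features with image-region features, yielding attention weights over both modalities. The weight matrices (W_b, W_q, W_v, w_hq, w_hv) are random placeholders for parameters that would be learned, and all shapes and names are assumptions for illustration, not taken from this dataset record.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parallel_co_attention(Q, V, k=32, seed=0):
    """Illustrative parallel co-attention sketch.

    Q: (d, T) question word features; V: (d, N) image region features.
    Returns attended summaries of each modality plus both attention maps.
    """
    d, T = Q.shape
    _, N = V.shape
    rng = np.random.default_rng(seed)
    # Randomly initialized projections stand in for learned weights.
    W_b = rng.standard_normal((d, d)) * 0.01
    W_q = rng.standard_normal((k, d)) * 0.01
    W_v = rng.standard_normal((k, d)) * 0.01
    w_hq = rng.standard_normal(k) * 0.01
    w_hv = rng.standard_normal(k) * 0.01

    # Affinity between every question word and every image region.
    C = np.tanh(Q.T @ W_b @ V)                # (T, N)
    # Fuse each modality with affinity-transformed features of the other.
    H_v = np.tanh(W_v @ V + (W_q @ Q) @ C)    # (k, N)
    H_q = np.tanh(W_q @ Q + (W_v @ V) @ C.T)  # (k, T)
    # Attention over image regions ("where to look") and
    # over question words ("what words to listen to").
    a_v = softmax(w_hv @ H_v)                 # (N,)
    a_q = softmax(w_hq @ H_q)                 # (T,)
    # Attention-weighted summaries of each modality.
    v_hat = V @ a_v                           # (d,)
    q_hat = Q @ a_q                           # (d,)
    return q_hat, v_hat, a_q, a_v

# Toy usage: 10 question words, 49 image regions, 64-dim features.
rng = np.random.default_rng(1)
Q = rng.standard_normal((64, 10))
V = rng.standard_normal((64, 49))
q_hat, v_hat, a_q, a_v = parallel_co_attention(Q, V)
print(a_q.shape, a_v.shape)  # (10,) (49,)
```

In the full model, the paper applies this kind of co-attention hierarchically at word, phrase, and question levels; the sketch shows a single level only.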

Data and Resources

This dataset has no data

Cite this as

Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh (2024). Dataset: Hierarchical Question-Image Co-Attention for Visual Question Answering. https://doi.org/10.57702/uhdmfu1r

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Field        Value
Created      December 2, 2024
Last update  December 2, 2024
Authors      Jiasen Lu, Jianwei Yang, Dhruv Batra, Devi Parikh