
COCOQA

COCOQA is used in the paper as one of a set of sequential vision-and-language tasks, where each example pairs an image with a text input.
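To make the input format concrete, the sketch below shows how a single example from such a task might be represented in Python. The field names and the sample record are illustrative assumptions for this page, not the dataset's published schema.

from dataclasses import dataclass

@dataclass
class VQAExample:
    """One image-plus-text example from a vision-and-language task."""
    image_path: str  # path to the underlying image file (assumed layout)
    question: str    # the text input paired with the image
    answer: str      # target answer (single-word answers in COCO-QA)

# Hypothetical record; the file name and QA pair are made up for illustration.
example = VQAExample(
    image_path="coco/train2014/COCO_train2014_000000000009.jpg",
    question="what is the color of the cat",
    answer="black",
)
print(f"{example.question} -> {example.answer}")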

Data and Resources

This dataset has no data resources attached.

Cite this as

Yuliang Cai, Jesse Thomason, Mohammad Rostami (2024). Dataset: COCOQA. https://doi.org/10.57702/7y17vs75

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 17, 2024
Last update: December 17, 2024
Defined in: https://doi.org/10.48550/arXiv.2303.14423
Authors: Yuliang Cai, Jesse Thomason, Mohammad Rostami
Homepage: https://arxiv.org/abs/1704.03155