COCOQA

COCOQA is one of the sequential vision-and-language tasks used in the paper; each task instance consists of an image and a text input.

Data and Resources

Cite this as

Yuliang Cai, Jesse Thomason, Mohammad Rostami (2024). Dataset: COCOQA. https://doi.org/10.57702/7y17vs75

DOI retrieved: December 17, 2024
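A BibTeX form of the citation above might look as follows; the entry type and citation key are assumptions, not part of the catalog record:

```bibtex
@misc{cai2024cocoqa,
  author = {Cai, Yuliang and Thomason, Jesse and Rostami, Mohammad},
  title  = {Dataset: {COCOQA}},
  year   = {2024},
  doi    = {10.57702/7y17vs75},
  url    = {https://doi.org/10.57702/7y17vs75}
}
```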

Additional Info

Field         Value
Created       December 17, 2024
Last update   December 17, 2024
Defined In    https://doi.org/10.48550/arXiv.2303.14423
Author        Yuliang Cai
More Authors  Jesse Thomason, Mohammad Rostami
Homepage      https://arxiv.org/abs/1704.03155