
LLaVA-1.5

The dataset used in this paper is LLaVA-1.5, a multimodal large language model (MLLM) with 7 billion parameters, used for multimodal tasks such as image captioning and visual question answering.
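As a minimal sketch of how LLaVA-1.5 is typically queried for visual question answering, the example below assumes the community llava-hf/llava-1.5-7b-hf checkpoint on Hugging Face and a recent transformers release; the checkpoint name, prompt template, and sample image URL are assumptions, not part of this dataset record.

import torch
import requests
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Assumed community checkpoint for LLaVA-1.5 (7B); not specified by this record.
model_id = "llava-hf/llava-1.5-7b-hf"

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example COCO image used purely for illustration.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-1.5 style chat prompt; <image> marks where the visual tokens are inserted.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))

The same pattern applies to image captioning by changing the question in the prompt.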

Data and Resources

Cite this as

Jinfeng Wei, Xiaofeng Zhang (2024). Dataset: LLaVA-1.5. https://doi.org/10.57702/f0jpybn2

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.1145/3664647.3681076
Author: Jinfeng Wei
More authors: Xiaofeng Zhang
Homepage: https://arxiv.org/abs/2305.04790