LLaVA-1.5

This entry refers to LLaVA-1.5, a multimodal large language model (MLLM) with 7 billion parameters, used in the paper for multimodal tasks such as image captioning and visual question answering.
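For orientation, the sketch below shows one common way to run visual question answering with a LLaVA-1.5 (7B) checkpoint through the Hugging Face transformers library. The checkpoint name "llava-hf/llava-1.5-7b-hf", the example image URL, and the prompt wording are assumptions for illustration and are not taken from this page.

# Minimal VQA sketch with LLaVA-1.5 (7B), assuming the community
# "llava-hf/llava-1.5-7b-hf" checkpoint and a recent transformers release.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Any RGB image works; this COCO URL is only a placeholder example.
image = Image.open(requests.get(
    "http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# LLaVA-1.5 uses a USER/ASSISTANT prompt with an <image> placeholder token.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output[0], skip_special_tokens=True))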

Data and Resources

This dataset has no data

Cite this as

Jinfeng Wei, Xiaofeng Zhang (2024). Dataset: LLaVA-1.5. https://doi.org/10.57702/f0jpybn2

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined In: https://doi.org/10.1145/3664647.3681076
Author: Jinfeng Wei
More Authors: Xiaofeng Zhang
Homepage: https://arxiv.org/abs/2305.04790