BLIP-2

BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models

Data and Resources

This dataset has no data

Cite this as

Xuantong Liu, Tianyang Hu, Wenjia Wang, Kenji Kawaguchi, Yuan Yao (2024). Dataset: BLIP-2. https://doi.org/10.57702/f7o0826b

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined In: https://doi.org/10.48550/arXiv.2308.10648
Citation: https://doi.org/10.48550/arXiv.2402.16305
Author: Xuantong Liu
More Authors: Tianyang Hu, Wenjia Wang, Kenji Kawaguchi, Yuan Yao
Homepage: https://arxiv.org/abs/2301.12597