Laion-400M: Open dataset of CLIP-filtered 400 million image-text pairs

This dataset contains 400 million image-text pairs, filtered using the CLIP model, intended for training CLIP-style vision-language models.

Data and Resources

Cite this as

Christoph Schuhmann, et al. (2024). Dataset: Laion-400M: Open dataset of CLIP-filtered 400 million image-text pairs. https://doi.org/10.57702/r9lzeyzy

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2404.12908
Author: Christoph Schuhmann
More authors: et al.
Homepage: https://arxiv.org/abs/2106.11196