2 datasets found

  • DINOv2

    DINOv2 is a vision foundation model trained on a large-scale curated image dataset.
  • CLIP

    The CLIP model and its variants are becoming the de facto backbone in many applications. However, training a CLIP model from hundreds of millions of image-text pairs can be...