23 datasets found

  • DisCo-CLIP: A Distributed Contrastive Loss for Memory Efficient CLIP Training

    We propose DisCo-CLIP, a distributed memory-efficient CLIP training approach, to reduce the memory consumption of contrastive loss when training contrastive learning models.
  • CLIP

    The CLIP model and its variants are becoming the de facto backbone in many applications. However, training a CLIP model from hundreds of millions of image-text pairs can be...
  • COCO

    Large-scale datasets [18, 17, 27, 6] boosted text-conditional image generation quality. However, in some domains it could be difficult to make such datasets and usually it could...