DisCo-CLIP: A Distributed Contrastive Loss for Memory Efficient CLIP Training

We propose DisCo-CLIP, a distributed, memory-efficient approach to CLIP training that reduces the memory consumption of the contrastive loss when training contrastive learning models.
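Below is a minimal PyTorch sketch of the general idea behind a memory-sharded distributed contrastive loss: each GPU gathers features from all ranks but only materializes the logit rows for its own local samples, so the full B x B similarity matrix is never replicated on any single device. This is a simplified illustration, not the authors' released implementation (see the linked GitHub repository); the helper name disco_contrastive_loss is hypothetical, and the full DisCo method additionally exchanges inter-GPU gradients, which is omitted here.

import torch
import torch.distributed as dist
import torch.nn.functional as F

def disco_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Sharded CLIP-style contrastive loss (illustrative sketch).

    Assumes torch.distributed is initialized and that img_feats and
    txt_feats are the L2-normalized features of this rank's local
    mini-batch of size b. Each rank materializes only a (b x B) slice
    of the logits, where B = world_size * b.
    """
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    b = img_feats.size(0)  # local batch size

    # Gather features from all ranks. all_gather does not propagate
    # gradients, so we reinsert the local shard afterwards to keep the
    # local computation on the autograd graph.
    all_img = [torch.zeros_like(img_feats) for _ in range(world_size)]
    all_txt = [torch.zeros_like(txt_feats) for _ in range(world_size)]
    dist.all_gather(all_img, img_feats)
    dist.all_gather(all_txt, txt_feats)
    all_img[rank] = img_feats
    all_txt[rank] = txt_feats
    all_img = torch.cat(all_img)  # (B, d)
    all_txt = torch.cat(all_txt)  # (B, d)

    # Only the logit rows owned by this rank are computed: (b x B)
    # instead of (B x B) per GPU.
    logits_i2t = img_feats @ all_txt.t() / temperature
    logits_t2i = txt_feats @ all_img.t() / temperature

    # Positive pairs sit on the diagonal of the global matrix, offset
    # by this rank's position in the gathered batch.
    labels = torch.arange(b, device=img_feats.device) + rank * b
    loss = (F.cross_entropy(logits_i2t, labels)
            + F.cross_entropy(logits_t2i, labels)) / 2
    return loss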

Cite this as

Yihao Chen, Xianbiao Qi, Jianan Wang, Lei Zhang (2024). Dataset: DisCo-CLIP: A Distributed Contrastive Loss for Memory Efficient CLIP Training. https://doi.org/10.57702/mxicig2x

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2304.08480
Author: Yihao Chen
More authors: Xianbiao Qi, Jianan Wang, Lei Zhang
Homepage: https://github.com/IDEA-Research/DisCo-CLIP