RECLIP: Resource-efficient CLIP by Training with Small Images

A simple method that minimizes the computational resource footprint of CLIP (Contrastive Language-Image Pretraining) by training with small images.
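For orientation only, the sketch below illustrates the general idea suggested by the title: a standard CLIP-style symmetric contrastive loss computed on images that are downsampled to a small resolution before encoding. The toy encoders, the 64x64 target size, and the temperature value are placeholder assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical toy encoders standing in for the image and text towers.
    class ToyImageEncoder(nn.Module):
        def __init__(self, embed_dim=128):
            super().__init__()
            self.conv = nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1)
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.proj = nn.Linear(32, embed_dim)

        def forward(self, x):
            h = self.pool(F.relu(self.conv(x))).flatten(1)
            return self.proj(h)

    class ToyTextEncoder(nn.Module):
        def __init__(self, vocab_size=1000, embed_dim=128):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, embed_dim)
            self.proj = nn.Linear(embed_dim, embed_dim)

        def forward(self, tokens):
            return self.proj(self.emb(tokens).mean(dim=1))

    def clip_contrastive_loss(img_emb, txt_emb, temperature=0.07):
        # Symmetric InfoNCE loss over the in-batch image-text similarity matrix.
        img_emb = F.normalize(img_emb, dim=-1)
        txt_emb = F.normalize(txt_emb, dim=-1)
        logits = img_emb @ txt_emb.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

    # Downsample images to a small resolution before encoding; this is the
    # resource-saving step named in the title (64x64 is an assumed example size).
    images = torch.randn(8, 3, 224, 224)           # dummy batch of full-size images
    tokens = torch.randint(0, 1000, (8, 16))       # dummy tokenized captions
    small_images = F.interpolate(images, size=(64, 64), mode="bilinear", align_corners=False)

    loss = clip_contrastive_loss(ToyImageEncoder()(small_images), ToyTextEncoder()(tokens))
    print(loss.item())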

Cite this as

Runze Li, Dahun Kim, Bir Bhanu, Weicheng Kuo (2024). Dataset: RECLIP: Resource-efficient CLIP by Training with Small Images. https://doi.org/10.57702/zb0xf43h

DOI retrieved: December 2, 2024

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Author        Runze Li
More Authors  Dahun Kim, Bir Bhanu, Weicheng Kuo
Homepage      https://openreview.net/forum?id=Ufc5cWhHko