
RECLIP: Resource-efficient CLIP by Training with Small Images

A simple method that minimizes the computational resource footprint of CLIP (Contrastive Language-Image Pretraining) by training with small images.
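
The page only names the approach, so as a loose illustration the sketch below shows a standard CLIP-style symmetric contrastive loss together with the kind of image downscaling the title refers to. The 64x64 target size, batch shapes, and random placeholder tensors are illustrative assumptions, not details taken from this dataset entry or the paper.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features, text_features, temperature=0.07):
    """Symmetric InfoNCE loss used by CLIP-style models (sketch)."""
    # Normalize so dot products are cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    # Pairwise image-text similarity logits; matching pairs lie on the diagonal.
    logits = image_features @ text_features.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

# The resource savings come from encoding low-resolution images:
# downscaling a 224x224 batch to, say, 64x64 greatly reduces vision-encoder FLOPs.
images = torch.randn(8, 3, 224, 224)  # toy batch (assumption, not real data)
small_images = F.interpolate(images, size=64, mode="bilinear", align_corners=False)

# Placeholder embeddings stand in for the image and text encoders.
img_emb, txt_emb = torch.randn(8, 512), torch.randn(8, 512)
print(clip_contrastive_loss(img_emb, txt_emb))
```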

Data and Resources

This dataset has no data

Cite this as

Runze Li, Dahun Kim, Bir Bhanu, Weicheng Kuo (2024). Dataset: RECLIP: Resource-efficient CLIP by Training with Small Images. https://doi.org/10.57702/zb0xf43h

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Author: Runze Li
More Authors: Dahun Kim, Bir Bhanu, Weicheng Kuo
Homepage: https://openreview.net/forum?id=Ufc5cWhHko