Contrastive Visual-Linguistic Pretraining

Contrastive Visual-Linguistic Pretraining (CVLP) is a novel approach to visual-linguistic pretraining that solves the domain bias and noisy label problems encountered with previous visual-linguistic pretraining approaches such as LXMERT and ViLBERT.
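The exact training objective is defined in the paper; as a rough illustration of the contrastive idea underlying this family of methods, the following is a minimal sketch of a generic InfoNCE-style contrastive loss (NumPy only; the function name and shapes are illustrative, not CVLP's actual implementation):

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.07):
    """Generic InfoNCE contrastive loss: each query's positive key is
    the key at the same index; all other keys act as negatives."""
    # L2-normalize embeddings so dot products are cosine similarities.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature               # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Softmax cross-entropy with the diagonal as the positive class.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Matched query/key pairs (same index) drive the loss toward zero, while mismatched keys raise it, which is what makes the objective robust to individually noisy labels.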

Data and Resources

This dataset has no data

Cite this as

Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su (2024). Dataset: Contrastive Visual-Linguistic Pretraining. https://doi.org/10.57702/n1or5w7m

Private DOI: This DOI is not yet resolvable.
It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Author: Lei Shi
More Authors: Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su
Homepage: https://github.com/ArcherYunDong/CVLP-