Contrastive Visual-Linguistic Pretraining

Contrastive Visual-Linguistic Pretraining (CVLP) is an approach to visual-linguistic pretraining that addresses the domain bias and noisy-label problems encountered by earlier methods such as LXMERT and ViLBERT.
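To illustrate the general idea behind contrastive pretraining, the sketch below computes a symmetric InfoNCE-style loss between paired visual and linguistic embeddings. This is a minimal, hedged illustration of the contrastive objective family, not CVLP's actual loss; the function name, the temperature value, and the use of NumPy are all assumptions made for the example.

```python
import numpy as np

def info_nce_loss(visual, text, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss.

    Rows of `visual` and `text` are assumed to be matched pairs;
    mismatched rows within the batch serve as negatives.
    (Illustrative sketch only -- not CVLP's exact objective.)
    """
    # L2-normalize so the dot product is cosine similarity
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = v @ t.T / temperature        # pairwise similarity matrix
    idx = np.arange(len(v))               # matching pairs lie on the diagonal

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

rng = np.random.default_rng(0)
v = rng.normal(size=(4, 8))
loss = info_nce_loss(v, v)  # identical embeddings -> near-zero loss
print(float(loss))
```

When the two embedding sets agree, the diagonal dominates the similarity matrix and the loss is small; for unrelated embeddings it approaches log(batch size).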

Data and Resources

Cite this as

Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su (2024). Dataset: Contrastive Visual-Linguistic Pretraining. https://doi.org/10.57702/n1or5w7m

DOI retrieved: December 16, 2024

Additional Info

Field         Value
Created       December 16, 2024
Last update   December 16, 2024
Author        Lei Shi
More authors  Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, Sen Su
Homepage      https://github.com/ArcherYunDong/CVLP-