SynCoBERT

SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation.

Data and Resources

Cite this as

Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan (2024). Dataset: SynCoBERT. https://doi.org/10.57702/0634ztgn

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2208.05596
Citation: https://doi.org/10.48550/arXiv.2403.16702
Author: Xin Wang
More authors: Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan
Homepage: https://arxiv.org/abs/2108.04556