BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

This paper introduces BERT, a language representation model that pre-trains deep bidirectional Transformer encoders on unlabeled text by jointly conditioning on both left and right context in all layers. The pre-trained model can then be fine-tuned with a single additional output layer for a wide range of language understanding tasks.
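As a quick illustration of how the pre-trained encoders described in the paper are typically consumed, the sketch below loads a BERT model and extracts contextual token embeddings. It assumes the Hugging Face `transformers` library, PyTorch, and the public `bert-base-uncased` checkpoint, none of which are referenced by this record; it is a minimal usage sketch, not part of the original release.

```python
# Minimal sketch: encode a sentence with a pre-trained BERT model.
# Assumes the Hugging Face `transformers` package, PyTorch, and the
# public `bert-base-uncased` checkpoint (not part of this dataset record).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

# WordPiece-tokenize the input and add the [CLS]/[SEP] special tokens.
inputs = tokenizer("BERT produces deep bidirectional representations.",
                   return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# One contextual vector per token: shape (batch, sequence_length, hidden_size).
token_embeddings = outputs.last_hidden_state
print(token_embeddings.shape)
```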

Data and Resources

Cite this as

Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (2024). Dataset: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. https://doi.org/10.57702/xvg4jrkz

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.1145/3546577
Citation:
  • https://doi.org/10.48550/arXiv.2305.12086
  • https://doi.org/10.48550/arXiv.2406.20054
  • https://doi.org/10.48550/arXiv.2105.12544
  • https://doi.org/10.48550/arXiv.2306.05245
  • https://doi.org/10.48550/arXiv.2402.06326
  • https://doi.org/10.48550/arXiv.2305.05393
Author: Jacob Devlin
More authors: Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Homepage: https://arxiv.org/abs/1810.04805