BERT: Pre-training of deep bidirectional transformers for language understanding

This paper proposes BERT, a pre-trained deep bidirectional transformer for language understanding.

Data and Resources

This dataset has no data

Cite this as

Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (2024). Dataset: BERT: Pre-training of deep bidirectional transformers for language understanding. https://doi.org/10.57702/xvg4jrkz

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined In: https://doi.org/10.1145/3546577
Citation:
  • https://doi.org/10.48550/arXiv.2305.12086
  • https://doi.org/10.48550/arXiv.2406.20054
  • https://doi.org/10.48550/arXiv.2105.12544
  • https://doi.org/10.48550/arXiv.2306.05245
  • https://doi.org/10.48550/arXiv.2402.06326
  • https://doi.org/10.48550/arXiv.2305.05393
Author: Jacob Devlin
More Authors:
  • Ming-Wei Chang
  • Kenton Lee
  • Kristina Toutanova
Homepage: https://arxiv.org/abs/1810.04805