BERT: Pre-training of deep bidirectional transformers for language understanding
Data and Resources
This dataset has no data
Cite this as
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova (2024). Dataset: BERT: Pre-training of deep bidirectional transformers for language understanding. https://doi.org/10.57702/xvg4jrkz
Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.
Additional Info
| Field | Value |
|---|---|
| Created | December 2, 2024 |
| Last update | December 2, 2024 |
| Defined In | https://doi.org/10.1145/3546577 |
| Citation | |
| Author | Jacob Devlin |
| More Authors | Ming-Wei Chang, Kenton Lee, Kristina Toutanova |
| Homepage | https://arxiv.org/abs/1810.04805 |