
XLNet: Generalized Autoregressive Pretraining for Language Understanding

XLNet is a generalized autoregressive pretraining method for language understanding.
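The dataset entry itself carries no files, so the usual way to work with XLNet is through a pretrained checkpoint. Below is a minimal sketch of loading the model to extract contextual embeddings; it assumes the Hugging Face transformers library (with PyTorch) and the public xlnet-base-cased checkpoint, neither of which this page specifies.

    # Minimal sketch: load a pretrained XLNet and extract contextual embeddings.
    # Assumes the Hugging Face `transformers` library with PyTorch and the
    # public `xlnet-base-cased` checkpoint; this dataset page names neither.
    from transformers import XLNetTokenizer, XLNetModel

    tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
    model = XLNetModel.from_pretrained("xlnet-base-cased")

    # Tokenize a sentence and run it through the encoder.
    inputs = tokenizer(
        "XLNet learns bidirectional context with an autoregressive objective.",
        return_tensors="pt",
    )
    outputs = model(**inputs)

    # One hidden vector per token: shape (batch, seq_len, hidden_size).
    print(outputs.last_hidden_state.shape)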

Data and Resources

This dataset has no data

Cite this as

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le (2025). Dataset: XLNet: Generalized Autoregressive Pretraining for Language Understanding. https://doi.org/10.57702/mt69o3yg

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: January 3, 2025
Last update: January 3, 2025
Defined In: https://doi.org/10.48550/arXiv.2105.03994
Citation: https://doi.org/10.48550/arXiv.2006.15020
Author: Zhilin Yang
More Authors: Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, Quoc V. Le
Homepage: https://arxiv.org/abs/1906.08237