
Improved Language Modeling by Decoding the Past

Highly regularized LSTMs achieve impressive results on several benchmark datasets in language modeling. We propose a new regularization method based on decoding the last token in the context using the predicted distribution of the next token.
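As a rough illustration of the idea described in the abstract, the sketch below adds a past-decoding term to a standard language-model loss: the predicted next-token distribution is used to mix word embeddings, and that mixture is decoded back to the last token of the context. This is a minimal sketch under assumptions, not the paper's implementation; the names past_decode_regularizer, past_decoder, and lambda_pdr are hypothetical.

```python
import torch
import torch.nn.functional as F

def past_decode_regularizer(logits, embedding, past_decoder, last_tokens):
    """Hypothetical sketch of a past-decode regularization term.

    logits:       (batch, vocab) unnormalized next-token scores from the LM
    embedding:    (vocab, emb_dim) word embedding matrix
    past_decoder: a linear layer mapping emb_dim back to vocab size
    last_tokens:  (batch,) indices of the last token in each context
    """
    # Predicted distribution over the next token.
    next_probs = F.softmax(logits, dim=-1)          # (batch, vocab)
    # Mix word embeddings using the predicted probabilities.
    mixed_emb = next_probs @ embedding              # (batch, emb_dim)
    # Try to recover the last context token from the mixture.
    past_logits = past_decoder(mixed_emb)           # (batch, vocab)
    # Cross-entropy of decoding the past serves as the regularizer.
    return F.cross_entropy(past_logits, last_tokens)

# Added to the usual language-modeling loss with a small coefficient, e.g.:
# loss = lm_loss + lambda_pdr * past_decode_regularizer(logits, emb, dec, last)
```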

Data and Resources

This dataset has no data

Cite this as

Siddhartha Brahma (2024). Dataset: Improved Language Modeling by Decoding the Past. https://doi.org/10.57702/fskxprc7

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Defined In: https://doi.org/10.48550/arXiv.1808.05908
Author: Siddhartha Brahma
Homepage: https://arxiv.org/abs/1809.06858