Improved Language Modeling by Decoding the Past

Highly regularized LSTMs achieve impressive results on several benchmark datasets for language modeling. We propose a new regularization method in which the last token in the context is decoded from the model's predicted distribution over the next token.
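The abstract describes the regularizer only at a high level. The sketch below is a minimal, hypothetical PyTorch realization of the stated idea: a soft embedding is built from the predicted next-token distribution, an auxiliary decoder must recover the last context token from it, and the resulting cross-entropy is added to the usual language-modeling loss. The names PastDecodeRegularizedLM, past_decoder, and pdr_weight, and the exact form of the auxiliary decoder, are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PastDecodeRegularizedLM(nn.Module):
    """LSTM language model with a hypothetical past-decoding regularizer."""

    def __init__(self, vocab_size, embed_dim, hidden_dim, pdr_weight=0.1):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.next_token_head = nn.Linear(hidden_dim, vocab_size)  # predicts x_{t+1}
        self.past_decoder = nn.Linear(embed_dim, vocab_size)      # recovers x_t (assumed form)
        self.pdr_weight = pdr_weight

    def forward(self, tokens, targets):
        # tokens:  (batch, seq) last tokens of the context, x_t
        # targets: (batch, seq) next tokens to predict, x_{t+1}
        hidden, _ = self.lstm(self.embedding(tokens))
        logits = self.next_token_head(hidden)                      # (batch, seq, vocab)

        # Standard language-modeling loss on the next token.
        lm_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                  targets.reshape(-1))

        # Past-decoding regularizer: form a soft embedding from the predicted
        # next-token distribution and try to decode the last context token from it.
        probs = F.softmax(logits, dim=-1)                          # (batch, seq, vocab)
        soft_emb = probs @ self.embedding.weight                   # (batch, seq, embed_dim)
        past_logits = self.past_decoder(soft_emb)
        pdr_loss = F.cross_entropy(past_logits.reshape(-1, past_logits.size(-1)),
                                   tokens.reshape(-1))

        return lm_loss + self.pdr_weight * pdr_loss

# Toy usage:
# model = PastDecodeRegularizedLM(vocab_size=100, embed_dim=32, hidden_dim=64)
# loss = model(torch.randint(0, 100, (4, 20)), torch.randint(0, 100, (4, 20)))
# loss.backward()

In this sketch, pdr_weight trades off the auxiliary past-decoding loss against next-token perplexity; the combined loss is backpropagated as usual.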

Cite this as

Siddhartha Brahma (2024). Dataset: Improved Language Modeling by Decoding the Past. https://doi.org/10.57702/fskxprc7

DOI retrieved: December 16, 2024

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Defined In: https://doi.org/10.48550/arXiv.1808.05908
Author: Siddhartha Brahma
Homepage: https://arxiv.org/abs/1809.06858