GLUE Benchmark

The GLUE benchmark consists of 9 sentence- or sentence-pair language understanding tasks, selected to cover a range of dataset sizes, text genres, and difficulty levels.
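For reference, the nine tasks can be enumerated by the config names commonly used in dataset loaders such as the Hugging Face `datasets` library (the loader names are an assumption about tooling, not part of this catalog entry); a minimal sketch:

```python
# The nine GLUE tasks, keyed by the config names commonly used in
# dataset loaders (e.g. the Hugging Face `datasets` library).
# "pair" marks sentence-pair tasks; "single" marks single-sentence tasks.
GLUE_TASKS = {
    "cola": "single",   # Corpus of Linguistic Acceptability
    "sst2": "single",   # Stanford Sentiment Treebank
    "mrpc": "pair",     # Microsoft Research Paraphrase Corpus
    "qqp":  "pair",     # Quora Question Pairs
    "stsb": "pair",     # Semantic Textual Similarity Benchmark
    "mnli": "pair",     # Multi-Genre Natural Language Inference
    "qnli": "pair",     # Question-answering NLI
    "rte":  "pair",     # Recognizing Textual Entailment
    "wnli": "pair",     # Winograd NLI
}

# Sanity check: the benchmark covers nine tasks in total.
assert len(GLUE_TASKS) == 9
```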

Data and Resources

Cite this as

Lucas Weber, Jaap Jumelet, Paul Michel, Elia Bruni, Dieuwke Hupkes (2024). Dataset: GLUE Benchmark. https://doi.org/10.57702/zfq6kj1s

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined In: https://doi.org/10.48550/arXiv.2305.04989
Citation:
  • https://doi.org/10.48550/arXiv.2205.06910
  • https://doi.org/10.48550/arXiv.2208.10806
  • https://doi.org/10.48550/arXiv.2202.04538
  • https://doi.org/10.48550/arXiv.2308.04624
  • https://doi.org/10.48550/arXiv.2305.18239
Author: Lucas Weber
More Authors: Jaap Jumelet, Paul Michel, Elia Bruni, Dieuwke Hupkes
Homepage: https://glue.cs.cmu.edu/