Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level
Data and Resources
- Original Metadata (JSON)
The JSON representation of the dataset and its distributions, based on DCAT.
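For readers unfamiliar with DCAT exports, the sketch below shows one way such a record could be inspected in Python. The file name `metadata.json` and the `title`, `distribution`, `format`, and `accessURL` keys are assumptions for illustration; the exact keys in this catalog's JSON export may differ.

```python
# Minimal sketch of reading a DCAT-style metadata export.
# Assumes a local file "metadata.json" shaped like a dcat:Dataset record
# with a "distribution" list; the actual keys in the export may differ.
import json

with open("metadata.json", encoding="utf-8") as f:
    record = json.load(f)

# Print the dataset title and each distribution's format and access URL, if present.
print(record.get("title"))
for dist in record.get("distribution", []):
    print(dist.get("format"), dist.get("accessURL"))
```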
Cite this as
Ruiqi Zhong, Dhruba Ghosh, Dan Klein, Jacob Steinhardt (2024). Dataset: Are Larger Pretrained Language Models Uniformly Better? Comparing Performance at the Instance Level. https://doi.org/10.57702/34bmylpg
DOI retrieved: December 17, 2024
Additional Info
Field | Value |
---|---|
Created | December 17, 2024 |
Last update | December 17, 2024 |
Defined In | https://doi.org/10.48550/arXiv.2105.06020 |
Author | Ruiqi Zhong |
More Authors | Dhruba Ghosh, Dan Klein, Jacob Steinhardt |
Homepage | https://github.com/ruiqi-zhong/acl2021-instance-level |