MNLI, QQP, and SST-2
The data used in this paper cover three tasks: Multi-Genre Natural Language Inference (MNLI), Quora Question Pairs (QQP), and Stanford Sentiment Treebank (SST-2).
Are Larger Pretrained Language Models Uniformly Better? Comparing Performance...
Larger language models have higher accuracy on average, but are they better on every single instance (datapoint)?
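The instance-level question the paper poses can be sketched in a few lines: given per-example correctness for a smaller and a larger model, average accuracy can improve even while some individual instances regress. The arrays below are purely illustrative, not results from the paper.

```python
# Hypothetical per-instance correctness (1 = correct) for a small and a large
# model on the same evaluation set; the values are illustrative only.
small = [1, 0, 1, 1, 0, 1, 0, 1]
large = [1, 1, 1, 0, 1, 1, 0, 1]

# Aggregate accuracy: the larger model wins on average.
acc_small = sum(small) / len(small)  # 0.625
acc_large = sum(large) / len(large)  # 0.75

# Instance-level view: count regressions, i.e. examples the small model
# got right but the large model got wrong.
regressions = sum(1 for s, l in zip(small, large) if s == 1 and l == 0)
print(acc_small, acc_large, regressions)  # 0.625 0.75 1
```

Here the large model is better on average yet still loses one instance the small model handled correctly, which is exactly the uniformity question the paper examines.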
Multilingual Llama
Multilingual Llama is a multilingual pretrained language model.