- Stanford Alpaca: An instruction-following LLaMA model
  A LLaMA 7B model fine-tuned on 52K instruction-following demonstrations generated in the self-instruct style with text-davinci-003.
- AlpacaEval 2
  An automated evaluation benchmark that measures how well LLM outputs align with human preferences: an LLM-based annotator compares each model's responses against a baseline model's responses and reports a win rate (a minimal win-rate sketch follows this list).
- Vicuna dataset
  An open-ended question set introduced with the Vicuna chatbot, commonly used to compare the response quality of instruction-following chat models.
- DOLLY dataset
  databricks-dolly-15k, an open dataset of roughly 15,000 human-written instruction-response pairs created by Databricks employees, covering tasks such as brainstorming, classification, question answering, and summarization (a loading sketch follows this list).
- Mixtral of Experts
  Mixtral 8x7B, a sparse mixture-of-experts language model from Mistral AI in which each layer routes every token to two of eight expert feed-forward blocks.
- Llama 2-7B-80k
  A Llama 2 7B model variant with its context window extended to 80K tokens.
- Mistral 7B
  A 7B-parameter open-weight language model from Mistral AI that uses grouped-query attention and sliding-window attention.
- AlpacaEval 2.0
  The 2.0 release of the AlpacaEval benchmark, which scores instruction-following models by the win rate of their outputs against a strong reference model, as judged by a GPT-4-based automatic annotator.
- Alpaca-52K
  The 52K instruction-following demonstrations released with Stanford Alpaca; each example contains an instruction, an optional input, and a target output generated with text-davinci-003 (a record-structure sketch follows this list).
- AlpacaFarm
  A simulation framework and dataset for studying methods that learn from feedback; it provides Alpaca-style instructions together with pairs of model responses and preference annotations produced by LLM-simulated annotators (a preference-pair sketch follows this list).
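To make the win-rate scoring behind AlpacaEval concrete, here is a minimal sketch in Python. It assumes a hypothetical list of per-instruction judgments (1.0 when the annotator prefers the candidate model, 0.0 when it prefers the baseline, 0.5 for a tie) and does not reproduce the alpaca_eval package's actual annotator or API.

```python
# Minimal sketch of an AlpacaEval-style win rate.
# The judgment format below is an assumption for illustration,
# not the alpaca_eval package's actual output schema.
from statistics import mean

def win_rate(judgments):
    """judgments[i] is 1.0 if the candidate beat the baseline on
    instruction i, 0.0 if it lost, and 0.5 for a tie."""
    return 100.0 * mean(judgments)

# Example: 3 wins, 1 tie, 1 loss over 5 instructions -> 70.0
print(win_rate([1.0, 1.0, 1.0, 0.5, 0.0]))
```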
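A minimal sketch of reading the DOLLY data with the Hugging Face datasets library; the databricks/databricks-dolly-15k dataset ID and the instruction, context, response, and category field names reflect the public Hub release and should be checked against the current dataset card.

```python
# Load databricks-dolly-15k (assumed dataset ID and field names).
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

example = dolly[0]
# Each record is expected to carry a human-written instruction, optional
# supporting context, a response, and a task category.
print(example["instruction"])
print(example["category"])
```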
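For Alpaca-52K, the records are distributed as a single JSON file (alpaca_data.json in the Stanford Alpaca repository); the sketch below assumes that file has been downloaded to the working directory and illustrates the instruction, input, and output fields each record carries.

```python
# Iterate over Alpaca-52K records (assumes alpaca_data.json from the
# Stanford Alpaca repository is available locally).
import json

with open("alpaca_data.json", encoding="utf-8") as f:
    records = json.load(f)

print(len(records))  # roughly 52K examples
for rec in records[:3]:
    # Each record has an instruction, an optional input ("" when unused),
    # and the target output generated with text-davinci-003.
    print(rec["instruction"], "|", rec["input"], "|", rec["output"][:60])
```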
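For AlpacaFarm, a sketch of reading preference pairs from the Hugging Face Hub; the tatsu-lab/alpaca_farm dataset ID, the alpaca_human_preference configuration, and the output_1, output_2, and preference fields are assumptions based on the public release and should be verified against the current dataset card.

```python
# Read AlpacaFarm preference pairs (assumed dataset ID, config, and fields).
# Datasets that ship a loading script may require trust_remote_code=True
# with recent versions of the datasets library.
from datasets import load_dataset

prefs = load_dataset("tatsu-lab/alpaca_farm", "alpaca_human_preference")

split_name = list(prefs.keys())[0]  # split names vary; take the first
ex = prefs[split_name][0]
# Each example pairs one instruction with two candidate responses and an
# annotator preference (1 or 2) indicating which response was preferred.
print(ex["instruction"])
print(ex["preference"])
```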