GLOW : Global Weighted Self-Attention Network for Web Search
GLOW is a novel Global Weighted Self-Attention Network for web document search that incorporates global corpus statistics into the deep matching model. -
BERT: Pre-training of deep bidirectional transformers for language understanding
This paper proposes BERT, a pre-trained deep bidirectional transformer for language understanding. -
Text-to-Image Synthesis Dataset
This dataset is used for text-to-image synthesis. -
SST-2, SST-5, MR, IMDB, AG News
Datasets used for sentence classification tasks. -
Ego4D Goal-Step
The Ego4D Goal-Step dataset is a large-scale egocentric video dataset containing 3,000 hours of video. The dataset is used for action recognition, action... -
String Transformation Tasks
A publicly available dataset of 130 real-world string transformation tasks from Cropper and Dumancic [2020]. -
GLUE development set
The GLUE development set is used to evaluate language models on a suite of natural language understanding tasks. -
LLaMA-7B and LLaMA-13B models
The paper does not explicitly name its dataset; the authors report using the LLaMA-7B and LLaMA-13B models, evaluated on the GLUE development set. -
ChatGPT Dataset
Rather than a conventional static dataset, the study uses ChatGPT, a platform built on a large language model (LLM). -
DailyDialog
The DailyDialog dataset is a large-scale multi-turn dialogue dataset consisting of 13,118 conversations about daily life, averaging roughly 8 speaker turns per dialogue. -
SHP and HH
The datasets used in the paper are SHP (Stanford Human Preferences) and HH (Anthropic's Helpful and Harmless preference data). -
Demonstration ITerated Task Optimization (DITTO)
The dataset used in the paper is a collection of emails and blog posts from 20 distinct authors, used to study few-shot alignment of large language models to individual users. -
Demystifying CLIP Data
Contrastive Language-Image Pre-training (CLIP) is an approach that has advanced research and applications in computer vision, fueling modern recognition systems and generative...