ANY dataset
The ANY dataset combines natural and synthetic data and is used to probe polarity via negative polarity items (NPIs) in two pre-trained Transformer-based models (BERT and GPT-2).
Chinese Medical Text Dataset
The dataset used in this paper is a collection of Chinese medical texts for training and testing BERT-based models.
Medical Text Dataset
The dataset used in this paper is a collection of medical texts for training and testing BERT-based models.
BanglaBERT
The BanglaBERT dataset is a pre-training corpus for the Bangla language, used to train the BanglaBERT language model.
BERTScore: Evaluating text generation with BERT
BERTScore is an automatic evaluation metric for text generation that scores a candidate against a reference by matching tokens via cosine similarity of contextual BERT embeddings.
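The greedy token-matching idea behind BERTScore can be sketched with toy embeddings. This is a minimal illustration, not the `bert-score` package API: the arrays below are numpy stand-ins for real BERT activations, and `bertscore_f1` is a hypothetical helper name.

```python
import numpy as np

def bertscore_f1(cand, ref):
    """Greedy-matching F1 in the style of BERTScore.

    cand, ref: (n_tokens, dim) arrays of contextual token embeddings.
    """
    # L2-normalize rows so dot products are cosine similarities.
    cand = cand / np.linalg.norm(cand, axis=1, keepdims=True)
    ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    sim = cand @ ref.T                   # pairwise cosine similarities
    recall = sim.max(axis=0).mean()      # each reference token -> best candidate match
    precision = sim.max(axis=1).mean()   # each candidate token -> best reference match
    return 2 * precision * recall / (precision + recall)

# Toy embeddings standing in for real BERT activations.
ref = np.array([[1.0, 0.0], [0.0, 1.0]])
cand_same = ref.copy()
cand_off = np.array([[0.9, 0.1], [0.2, 0.8]])
print(bertscore_f1(cand_same, ref))  # identical embeddings give a perfect score
print(bertscore_f1(cand_off, ref))   # slightly different embeddings score lower
```

An identical candidate scores 1.0; any divergence in the embeddings lowers precision, recall, or both.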
Sentence-BERT
Sentence-BERT (SBERT) derives semantically meaningful sentence embeddings using Siamese BERT networks, so that sentences can be compared directly with cosine similarity.
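The pooling step at the core of SBERT can be sketched in a few lines: token embeddings are mean-pooled (ignoring padding) into one fixed-size vector per sentence, and sentence similarity is then plain cosine similarity. The arrays below are toy stand-ins for real BERT outputs, and `mean_pool` is a hypothetical helper, not the `sentence-transformers` API.

```python
import numpy as np

def mean_pool(token_embs, mask):
    """SBERT-style mean pooling: average token embeddings, ignoring padding."""
    mask = mask[:, None].astype(float)
    return (token_embs * mask).sum(axis=0) / mask.sum()

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy token embeddings for two "sentences" (real SBERT uses BERT activations).
sent_a = np.array([[1.0, 0.0], [0.8, 0.2], [0.0, 0.0]])  # last row is padding
mask_a = np.array([1, 1, 0])
sent_b = np.array([[0.9, 0.1], [1.0, 0.0]])
mask_b = np.array([1, 1])

emb_a = mean_pool(sent_a, mask_a)
emb_b = mean_pool(sent_b, mask_b)
print(cosine(emb_a, emb_b))  # high similarity for these near-identical toy sentences
```

Mean pooling with a padding mask is the default pooling strategy described in the SBERT paper; CLS-token or max pooling are drop-in alternatives.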
Cross-Lingual Ability of Multilingual BERT
A dataset accompanying an empirical study of the cross-lingual transfer ability of Multilingual BERT (mBERT).
BERT: Pre-training of deep bidirectional transformers for language understanding
This paper proposes BERT, a pre-trained deep bidirectional transformer for language understanding.