iNaturalist-2019
The dataset used in the paper is iNaturalist-2019, a large-scale image classification dataset. -
MNIST handwritten digit database
The MNIST handwritten digit database is a dataset of 70,000 28x28 grayscale images of handwritten digits (60,000 for training and 10,000 for testing). -
CIFAR-10, Imagenette, and ImageNet
The datasets used in the paper are CIFAR-10, Imagenette, and ImageNet. -
ImageNet 2012 validation set
The ImageNet 2012 validation set contains 50,000 labeled images covering 1,000 object classes and is widely used for benchmarking image classifiers. -
GTSRB dataset
The GTSRB dataset contains over 50,000 images of German traffic signs belonging to 43 different classes. -
ILSVRC2012 (ImageNet 1K)
The dataset used in the paper is ILSVRC2012 (ImageNet 1K), a large-scale image classification dataset. -
ILSVRC2012
The paper does not describe the dataset explicitly, but the authors state that they used a subset of the validation set from the ImageNet Large Scale Visual Recognition Challenge (ILSVRC2012). -
MNIST and CIFAR10
The MNIST and CIFAR10 datasets are used to evaluate the proposed Adversarial Training with Transferable Adversarial Examples (ATTA) method. -
ImageNet-21k and ImageNet-1k
The ImageNet-21k and ImageNet-1k datasets are two large-scale image classification datasets. -
TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Tran...
Mixup is a commonly adopted data augmentation technique for image classification. Recent advances in mixup methods primarily focus on mixing based on saliency. -
Custom Dataset
The authors created a custom dataset for their experiment, consisting of 33,000 images of 320 possible object-image combinations, with 10 possible shapes, 8 possible colors, 2... -
Reg-mixup: Mixup as a regularizer can surprisingly improve accuracy and out d...
Mixup as a regularizer can surprisingly improve accuracy and out-of-distribution robustness. -
Tune it or don’t use it: Benchmarking data-efficient image classification
Tune it or don’t use it: Benchmarking data-efficient image classification. -
A simple data mixing prior for improving self-supervised learning
A simple data mixing prior for improving self-supervised learning. -
i-mix: A domain-agnostic strategy for contrastive representation learning
i-Mix is a domain-agnostic regularization strategy that applies mixup to contrastive representation learning. -
ciFAIR-100 and ciFAIR-10
ciFAIR-100 and ciFAIR-10 are variants of CIFAR-100 and CIFAR-10 with near-duplicate test images replaced; small labeled subsets of them are used to benchmark learning with limited labels. -
Infinite Class Mixup
Mixup is a widely adopted strategy for training deep networks, in which additional samples are augmented by interpolating the inputs and labels of training pairs. Mixup has been shown to...
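The interpolation described above (mixing inputs and labels of random training pairs) can be sketched in a few lines. This is a minimal NumPy illustration, not code from any of the listed papers; the function name, the batch-level pairing, and the Beta(alpha, alpha) sampling of the mixing coefficient follow the standard mixup formulation but are assumptions here.

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Mix each sample with a randomly paired sample from the same batch.

    x: (N, ...) array of inputs; y_onehot: (N, C) one-hot labels.
    A single mixing weight lambda ~ Beta(alpha, alpha) is drawn for the batch.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)            # interpolation coefficient in [0, 1]
    perm = rng.permutation(x.shape[0])      # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[perm]   # convex combination of inputs
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # same combination of labels
    return x_mix, y_mix
```

Because the labels are mixed with the same coefficient as the inputs, each mixed label row remains a valid probability distribution (its entries still sum to 1), which is what lets mixup be trained with an ordinary cross-entropy loss.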