Iterative Teaching by Data Hallucination
The dataset used in the paper for iterative teaching by data hallucination. -
Graph Augmentation for Medical Waveform Data
Graph-based data augmentation method for medical waveform data -
Sample Selection for Data Augmentation in Natural Language Processing
Deep learning-based text classification models need abundant labeled data to achieve competitive performance. To tackle this, several studies have tried to use data augmentation to... -
PneumoniaMNIST
Biomedical image analysis, data augmentation, Generative Adversarial Networks (GANs), synthetic images -
Synthetic two-dimensional data and MNIST digits
The dataset used in the experiments with the synthetic two-dimensional data and the MNIST digits. -
MosaicFusion: Diffusion Models as Data Augmenters for Large Vocabulary Instance Segmentation
MosaicFusion: A simple yet effective diffusion-based data augmentation approach for large vocabulary instance segmentation. -
Population Based Augmentation
A key challenge in leveraging data augmentation for neural network training is choosing an effective augmentation policy from a large search space of candidate operations. -
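A rough Python sketch of the idea behind such a policy search, assuming a policy is a list of (operation, probability, magnitude) triples mutated population-based-training style; the toy operations, ranges, and helper names below are illustrative assumptions, not the paper's search space.

    # Hypothetical sketch: an augmentation policy as (operation, probability, magnitude)
    # triples, perturbed PBT-style. Toy ops only; not the paper's operation set.
    import random
    import numpy as np

    OPS = {
        "flip_lr":   lambda img, m: np.fliplr(img),
        "add_noise": lambda img, m: img + np.random.normal(0, 0.1 * m, img.shape),
        "brighten":  lambda img, m: np.clip(img + 0.05 * m, 0.0, 1.0),
    }

    def random_policy(n_ops=3):
        # Each entry: (op name, application probability, integer magnitude 0-10).
        return [(random.choice(list(OPS)), random.random(), random.randint(0, 10))
                for _ in range(n_ops)]

    def mutate_policy(policy):
        # Exploration step: jitter the probability and magnitude of each entry.
        return [(op,
                 float(np.clip(p + random.uniform(-0.1, 0.1), 0.0, 1.0)),
                 int(np.clip(m + random.choice([-1, 0, 1]), 0, 10)))
                for op, p, m in policy]

    def apply_policy(img, policy):
        for op, p, m in policy:
            if random.random() < p:
                img = OPS[op](img, m)
        return img

    img = np.random.rand(32, 32, 3)              # stand-in image
    augmented = apply_policy(img, mutate_policy(random_policy()))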
MuJoCo Environments with Noise Augmentation
The dataset used in the paper is a set of MuJoCo environments with noise augmentation. -
XOR Mixup: Privacy-Preserving Data Augmentation for One-Shot Federated Learning
User-generated data distributions are often imbalanced across devices and labels, hampering the performance of federated learning (FL). To remedy this non-independent and... -
Contextual augmentation: Data augmentation by words with paradigmatic relations
Contextual augmentation: Data augmentation by words with paradigmatic relations. -
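A rough sketch of context-aware word substitution in that spirit, using an off-the-shelf masked language model from Hugging Face transformers as a stand-in for the paper's label-conditional bidirectional language model; the model choice and the filtering heuristic are assumptions.

    # Sketch: replace a word with contextually plausible alternatives predicted by a
    # masked LM (a stand-in for the paper's label-conditional bidirectional LM).
    from transformers import pipeline

    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    def contextual_substitutes(tokens, position, top_k=5):
        masked = tokens.copy()
        masked[position] = fill_mask.tokenizer.mask_token
        predictions = fill_mask(" ".join(masked), top_k=top_k)
        # Keep predictions other than the original word as augmentation candidates.
        return [p["token_str"] for p in predictions
                if p["token_str"].lower() != tokens[position].lower()]

    tokens = "the actors are fantastic".split()
    print(contextual_substitutes(tokens, position=3))  # e.g. ['great', 'wonderful', ...]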
TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding
TreeMix is a compositional data augmentation approach for natural language understanding. It leverages constituency parsing trees to decompose sentences into sub-structures and... -
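A toy sketch of the subtree-swapping idea: the hand-specified constituent spans below stand in for spans that would come from a constituency parser, and the length-proportional label mixing is a simplified assumption rather than the paper's exact procedure.

    # Toy sketch of constituency-based mixing: swap a constituent span between two
    # labeled sentences and mix the labels by the fraction of exchanged tokens.
    import numpy as np

    def treemix_example(tokens_a, label_a, span_a, tokens_b, label_b, span_b):
        sub_b = tokens_b[span_b[0]:span_b[1]]
        mixed_tokens = tokens_a[:span_a[0]] + sub_b + tokens_a[span_a[1]:]
        # Mix labels in proportion to how many tokens came from each source.
        frac_b = len(sub_b) / len(mixed_tokens)
        mixed_label = (1 - frac_b) * np.asarray(label_a) + frac_b * np.asarray(label_b)
        return mixed_tokens, mixed_label

    tokens_a = "the movie was a complete waste of time".split()    # negative
    tokens_b = "an absolutely delightful and moving story".split()  # positive
    mixed, label = treemix_example(
        tokens_a, [1, 0], (3, 8),   # span "a complete waste of time"
        tokens_b, [0, 1], (1, 6),   # span "absolutely delightful and moving story"
    )
    print(" ".join(mixed), label)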
Generative Adversarial Nets
Generative adversarial nets (GANs) are a class of deep learning models that consist of two neural networks: a generator and a discriminator. -
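A minimal PyTorch sketch of that two-network setup, trained here on a toy 1-D Gaussian; the architectures and hyperparameters are arbitrary assumptions for illustration only.

    # Minimal GAN sketch: a generator maps noise to samples, a discriminator scores
    # real vs. generated samples, and the two are trained adversarially.
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(200):
        real = torch.randn(64, 1) * 0.5 + 2.0    # toy "real" data drawn from N(2, 0.5)
        fake = G(torch.randn(64, 8))

        # Discriminator step: push real samples toward 1 and generated samples toward 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: try to fool the discriminator into scoring fakes as real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()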
STL-10 dataset
The dataset used in this paper is a collection of images from the STL-10 dataset, preprocessed and used for training and evaluation of the proposed diffusion spectral entropy... -
MixupE: Understanding and Improving Mixup from Directional Derivative Perspective
Mixup is a popular data augmentation technique for training deep neural networks where additional samples are generated by linearly interpolating pairs of inputs and their labels. -
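A minimal sketch of the vanilla mixup interpolation described here (without MixupE's directional-derivative-based adjustment); the Beta parameter alpha is an arbitrary assumption.

    # Vanilla mixup: convexly combine two samples and their one-hot labels with a
    # Beta-distributed coefficient. (MixupE adds a correction on top of this.)
    import numpy as np

    def mixup(x1, y1, x2, y2, alpha=0.2):
        lam = np.random.beta(alpha, alpha)
        return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

    x1, y1 = np.random.rand(3, 32, 32), np.array([1.0, 0.0])  # sample from class 0
    x2, y2 = np.random.rand(3, 32, 32), np.array([0.0, 1.0])  # sample from class 1
    x_mix, y_mix = mixup(x1, y1, x2, y2)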
Latent Conditional Diffusion-based Data Augmentation for Continuous-Time Dynamic Graphs
Continuous-Time Dynamic Graphs (CTDGs) precisely model evolving real-world relationships, drawing heightened interest in dynamic graph learning across academia and industry... -
Boomerang: Local sampling on image manifolds using diffusion models
The paper does not explicitly describe its dataset, but mentions that the authors use it for data anonymization, data augmentation, and image perceptual quality... -
Mixup: Beyond empirical risk minimization
The paper does not describe a single dataset in detail; the authors report experiments on CIFAR-10, CIFAR-100, ImageNet, CUB-200-2011, and Stanford Dogs.