LAION Aesthetics V2 (L-Aes)
The LAION Aesthetics V2 (L-Aes) dataset contains 0.22M image-text pairs.
LAION-Aesthetics
The LAION-Aesthetics datasets are large-scale subsets of LAION images filtered by a learned aesthetic-score predictor, used for training and evaluating computer vision models.
Stacked Wasserstein Autoencoder
The proposed model is built on the theoretical analysis presented in [30,14]. Similar to the ARAE [14], our model provides flexibility in learning an autoencoder from the input...
BigGAN-Deep
This dataset is used for training and testing the BigGAN-Deep model.
Degeneration-Tuning: Using Scrambled Grid shield Unwanted Concepts from Stabl...
The dataset used in the paper is not explicitly described, but it is mentioned that the authors analyzed the generative mechanism of diffusion models and proposed a novel method...
ControlVAE: Controllable Variational Autoencoder
The datasets used to evaluate ControlVAE on language modeling, disentangled representation learning, and image generation tasks.
DiffusionForensics
The dataset used in the paper for testing the proposed Data-Independent Operator (DIO) framework for generalizable forgery image detection.
Synthetic MNIST dataset
The dataset used in the paper is a synthetic MNIST dataset generated by forming barycenters constructed with weights sampled uniformly from the simplex ∆3.
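A minimal sketch of how such barycentric weights can be drawn, assuming ∆3 denotes the simplex of three nonnegative weights summing to one; sampling from Dirichlet(1, ..., 1) is uniform on that simplex:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_simplex(k, n, rng):
    """Draw n weight vectors uniformly from the (k-1)-simplex
    via Dirichlet(1, ..., 1)."""
    return rng.dirichlet(np.ones(k), size=n)

# e.g. barycentric weights over three source digits
w = sample_simplex(3, 5, rng)  # shape (5, 3), rows sum to 1
```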
Wasserstein Auto-Encoder (WAE)
Wasserstein Auto-Encoder (WAE) is a generative model that minimizes a penalized form of the Wasserstein distance between the model and data distributions, regularizing the distribution of encoded latent codes to match a prior; its encoder and decoder combine convolutional and fully connected layers.
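The latent regularizer in the WAE-MMD variant is a maximum mean discrepancy between encoded codes and prior samples. A numpy sketch of that penalty, using an RBF kernel and a biased estimator for brevity (the paper favors an inverse multiquadratic kernel and an unbiased estimate):

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased squared-MMD estimate between samples x and y
    with an RBF kernel of bandwidth sigma."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
prior = rng.standard_normal((256, 2))    # samples from p(z) = N(0, I)
encoded = rng.standard_normal((256, 2))  # stand-in for encoder outputs q(z)
penalty = rbf_mmd2(encoded, prior)       # small when q(z) matches the prior
```

In training, this penalty is added to the reconstruction loss with a weight λ, pushing the aggregate posterior toward the prior without a per-sample KL term.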
LSUN Bedrooms
The dataset used in the paper is LSUN Bedrooms, a large-scale image dataset of bedroom scenes.
Toward Joint Image Generation and Compression using Generative Adversarial Ne...
The proposed framework generates JPEG compressed images using generative adversarial networks.
GAN datasets
The dataset used in this paper is a collection of images generated by different Generative Adversarial Networks (GANs). The dataset is used to evaluate the performance of GANs...
Object Saliency Noise for Conditional Image Generation with Diffusion Models
Conditional image generation has paved the way for several breakthroughs in image editing, stock-photo generation, and 3-D object generation.
DCGAN dataset
The dataset used for training and testing the DCGAN model.
MNIST and FashionMNIST
The MNIST and FashionMNIST datasets are used to test the performance of the proposed generative autoencoders.
CLIP-GLaSS
The dataset used for the text-to-image task consists of 20 context tokens, to which three fixed tokens have been concatenated, representing the static context "the picture of".
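A sketch of that prompt layout: 20 optimizable context tokens plus the three fixed tokens. The placeholder token strings and the ordering (fixed tokens appended last) are assumptions for illustration, not CLIP's actual BPE vocabulary:

```python
# Hypothetical prompt layout for the CLIP-GLaSS text-to-image task.
NUM_CONTEXT = 20
FIXED_TOKENS = ["the", "picture", "of"]  # static context "the picture of"

# Placeholder context tokens standing in for learnable embeddings.
context_tokens = [f"<ctx{i}>" for i in range(NUM_CONTEXT)]
prompt = context_tokens + FIXED_TOKENS  # 23 tokens in total
```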
FFHQ, AFHQ, and LSUN
The proposed method uses the FFHQ, AFHQ, and LSUN datasets for image generation tasks.