Self-Guided Generation of Minority Samples Using Diffusion Models
We present a novel approach for generating minority samples that live on low-density regions of a data manifold. -
SRNDiff: Short-term precipitation nowcasting with condition diffusion model
SRNDiff applies a conditional diffusion model to short-term precipitation nowcasting, exploiting the ability of diffusion models to generate high-quality, realistic samples. -
DPM-Solver++
DPM-Solver++ is a fast, high-order solver for guided sampling of diffusion probabilistic models, not a dataset. -
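In practice the solver is used as a drop-in replacement for the default sampler at inference time. A minimal sketch follows, assuming the Hugging Face diffusers implementation of DPM-Solver++ (DPMSolverMultistepScheduler); the base model ID, step count, and guidance scale are illustrative choices, not values from the paper.

```python
# Minimal sketch: swapping in DPM-Solver++ as the sampler for a Stable Diffusion
# pipeline via Hugging Face diffusers. Model ID and hyperparameters are illustrative.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default scheduler with DPM-Solver++ (2nd-order multistep).
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="dpmsolver++", solver_order=2
)

# High-order solvers need far fewer steps than plain DDPM/DDIM sampling.
image = pipe(
    "a photo of a mountain lake at sunrise",
    num_inference_steps=20,
    guidance_scale=7.5,
).images[0]
image.save("sample.png")
```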
Exact Diffusion Inversion via Bi-directional Integration Approximation
No dataset is explicitly described; the authors demonstrate exact diffusion inversion on images generated with a pre-trained model. -
RECAP: Principled Recaptioning Improves Image Generation
A text-to-image diffusion model trained on a recaptioned dataset to improve image generation quality and semantic alignment. -
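The core recipe is to re-caption existing training images with an automatic captioner and fine-tune the text-to-image model on the new captions. The sketch below illustrates only the recaptioning loop, using an off-the-shelf BLIP captioner from Hugging Face transformers as a stand-in; the paper's actual captioning model and data pipeline may differ, and the paths are illustrative.

```python
# Illustrative recaptioning loop: replace original alt-text captions with captions
# from an off-the-shelf captioner (BLIP here as a stand-in, not necessarily RECAP's model).
from pathlib import Path
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def recaption(image_path: str) -> str:
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=50)
    return processor.decode(out[0], skip_special_tokens=True)

# The recaptioned (image, caption) pairs would then be used to fine-tune the
# text-to-image diffusion model in place of the original noisy captions.
new_captions = {p.name: recaption(str(p)) for p in Path("images").glob("*.jpg")}
```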
GeoDiffusion: Text-Prompted Geometric Control for Object Detection Data Generation
Diffusion models have attracted significant attention due to their remarkable ability to create content and generate data for tasks like image classification. However, the usage... -
FreeTuner Dataset
The dataset contains images of subjects and styles, used to evaluate the training-free FreeTuner method. -
FreeTuner: Any Subject in Any Style with Training-free Diffusion
FreeTuner is a training-free method for compositional personalization that can generate any user-provided subject in any user-provided style. -
SD architecture
The Stable Diffusion (SD) architecture used in experiments with LoRA-enhanced distillation on guided diffusion models. -
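As background on the LoRA side of such experiments, here is a minimal PyTorch sketch of the low-rank adapter parameterization applied to a single linear layer; it illustrates the generic LoRA idea only, not the paper's distillation setup, and the rank, scaling, and layer sizes are illustrative.

```python
# Minimal sketch of the LoRA parameterization (generic, not the paper's pipeline).
# A frozen base weight W is augmented with a trainable low-rank update B @ A,
# so only r * (d_in + d_out) parameters are learned per adapted layer.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the original weights
            p.requires_grad = False
        d_out, d_in = base.weight.shape
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # trainable params only
```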
Concept Sliders Test Dataset
The dataset used for testing the Concept Sliders, consisting of paired image data and text prompts. -
Concept Sliders Dataset
The dataset used for training the Concept Sliders, consisting of paired image data and text prompts. -
Photorealistic text-to-image diffusion models with deep language understanding
The authors present Imagen, a photorealistic text-to-image diffusion model that combines a large frozen language-model text encoder with cascaded diffusion models to achieve deep language understanding. -
D-Flow: Differentiating through Flows for Controlled Generation
The dataset used in the paper "D-Flow: Differentiating through Flows for Controlled Generation", which controls generation by differentiating through the flow's sampling process. -
ControlNet dataset
A dataset of conditioning inputs (such as edge maps or other spatial controls) paired with images, used with ControlNet for controllable image generation. -
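To show how such condition/image pairs are consumed at inference, here is a minimal sketch assuming the Hugging Face diffusers ControlNet pipeline with a Canny-edge checkpoint; the checkpoint names, input file, and prompt are illustrative.

```python
# Minimal sketch: conditioning Stable Diffusion on a Canny edge map with ControlNet
# via Hugging Face diffusers. Checkpoint names and the input image are illustrative.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Build the conditioning image: a Canny edge map of an existing photo.
rgb = np.array(Image.open("input.png").convert("RGB"))
gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))  # 3-channel edge map

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

result = pipe(
    "a futuristic city at night", image=control_image, num_inference_steps=30
).images[0]
result.save("controlled.png")
```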
GEMRec-18K
The GEMRec-18K dataset contains 18,000 images generated by 200 text-to-image diffusion models fine-tuned from Stable Diffusion. -
Non-linear Correction for Diffusion Model at Large Guidance Scale
A large-scale image generation dataset used to evaluate characteristic guidance, a non-linear correction for diffusion models at large guidance scales. -
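For context, the baseline that such non-linear corrections target is standard classifier-free guidance, which combines conditional and unconditional noise predictions linearly and is known to degrade at large guidance scales. The sketch below shows only this baseline rule, not the paper's characteristic guidance method; eps_model and its signature are hypothetical.

```python
# Minimal sketch of standard (linear) classifier-free guidance -- the baseline that
# non-linear corrections such as characteristic guidance aim to improve at large scales.
# `eps_model` is a hypothetical noise-prediction network; all names are illustrative.
import torch

def cfg_noise_estimate(eps_model, x_t, t, cond, guidance_scale: float = 7.5):
    """Linear CFG: eps_uncond + w * (eps_cond - eps_uncond)."""
    eps_uncond = eps_model(x_t, t, cond=None)  # unconditional prediction
    eps_cond = eps_model(x_t, t, cond=cond)    # conditional prediction
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```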
UniControl: A Unified Diffusion Model for Controllable Visual Generation In the Wild
UniControl is a unified diffusion model for controllable visual generation in the wild, capable of simultaneously handling various visual conditions.