- Deep Saliency Prior for Reducing Visual Distraction
  A dataset of images used to evaluate the proposed method for reducing visual distraction in images.
- SemanticGAN
  SemanticGAN uses a dataset of real images and their corresponding semantic segmentation masks.
- DatasetGAN
  DatasetGAN uses a dataset of real images and their corresponding semantic segmentation masks.
- InstructPix2Pix
  InstructPix2Pix is an instruction-following model for image editing tasks, including removing objects from images; a minimal usage sketch appears after this list.
- Visual ChatGPT
  Visual ChatGPT is a system that integrates different Visual Foundation Models to understand visual information and generate corresponding answers.
- HIVE: Harnessing Human Feedback for Instructional Visual Editing
  The instructional visual editing dataset used in HIVE (Harnessing Human Feedback for Instructional Visual Editing).
- MDP: A Generalized Framework for Text-Guided Image Editing by Manipulating th...
  A generalized framework for text-guided image editing using diffusion models.
- User-Controllable Latent Transformer for StyleGAN Image Layout Editing
  The dataset used in the Pacific Graphics 2022 paper on user-controllable latent code transformation for StyleGAN image layout editing.
- Zero-shot semantic image editing dataset
  A set of 150 tuples, each containing a source image, a source text, and a target text; the record layout is sketched after this list.
- Stable Diffusion Model (SDM) v1.5
  The paper does not explicitly describe its dataset; the authors state that they used the Stable Diffusion Model (SDM) v1.5.
- LASPA: Latent Spatial Alignment for Fast Training-free Single Image Editing
  A training-free framework for single-image editing using pre-trained text-to-image diffusion models.
- Quantitative Evaluation Dataset
  A dataset comprising 3000 images for quantitative evaluation of specified-region customization with both text and image inputs.
- Instruct-Video2Avatar
  Given a short monocular RGB video and text instructions, the method uses an image-conditioned diffusion model to edit one head image and uses the video stylization method to...
- Region-Aware Diffusion for Zero-Shot Text-Driven Image Editing
  A dataset used to evaluate the ability of models to edit images based on text descriptions.
- Flickr-Faces-HQ
  StyleCLIP-FEU uses Flickr-Faces-HQ (FFHQ) as the image corpus I.
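
For the InstructPix2Pix entry above, the following is a minimal usage sketch assuming the Hugging Face diffusers library and the publicly released timbrooks/instruct-pix2pix checkpoint; the file names and the editing instruction are hypothetical placeholders, not part of the catalog.

    import torch
    from PIL import Image
    from diffusers import StableDiffusionInstructPix2PixPipeline

    # Load the public InstructPix2Pix checkpoint (assumed to be available via the Hub).
    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
    ).to("cuda")

    # Hypothetical input image and instruction for an object-removal edit.
    image = Image.open("street_scene.png").convert("RGB")
    edited = pipe(
        "remove the traffic cone from the sidewalk",
        image=image,
        num_inference_steps=30,
        image_guidance_scale=1.5,  # fidelity to the input image
        guidance_scale=7.0,        # adherence to the text instruction
    ).images[0]
    edited.save("street_scene_edited.png")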
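
For the zero-shot semantic image editing dataset above, the sketch below shows one way to represent its 150 (source image, source text, target text) tuples in Python; the CSV index, its column names, and the field names are assumptions for illustration, not the dataset's actual distribution format.

    import csv
    from dataclasses import dataclass
    from pathlib import Path

    @dataclass
    class EditTuple:
        # One evaluation case: an image, the text describing it, and the text to edit it toward.
        source_image: Path
        source_text: str
        target_text: str

    def load_edit_tuples(index_csv: Path) -> list[EditTuple]:
        # Read the 150 tuples from a hypothetical CSV index with columns:
        # image_path, source_text, target_text.
        with open(index_csv, newline="", encoding="utf-8") as f:
            return [
                EditTuple(Path(row["image_path"]), row["source_text"], row["target_text"])
                for row in csv.DictReader(f)
            ]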