Style-guided generation with Stable Diffusion
The dataset, used in this paper for a style-guided generation task with Stable Diffusion, contains images in a variety of styles together with their associated prompts.
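To make the task concrete, below is a minimal sketch of prompt-driven style guidance with the Hugging Face diffusers library; the checkpoint name, style phrasing, and sampler settings are illustrative assumptions rather than details from the paper.

    import torch
    from diffusers import StableDiffusionPipeline

    # Assumed checkpoint; the paper's exact model and version are not specified here.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    )
    pipe = pipe.to("cuda")

    # Style guidance expressed through the text prompt (hypothetical example).
    prompt = "a mountain village at dawn, in the style of ukiyo-e woodblock prints"
    image = pipe(prompt, guidance_scale=7.5).images[0]
    image.save("styled_sample.png")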
E-WNC: Explainable Subjective Bias Style Transfer
An explainable subjective bias style transfer dataset, built by augmenting an existing subjective bias corpus (WNC) with synthetic textual explanations generated by a teacher model.
E-GYAFC: Explainable Formality Style Transfer
An explainable formality style transfer dataset, built by augmenting an existing formality corpus (GYAFC) with synthetic textual explanations generated by a teacher model.
ICLEF: In-Context Learning with Expert Feedback for Explainable Style
The approach that builds the two explainable style transfer datasets above (E-WNC and E-GYAFC) by augmenting existing style transfer corpora with synthetic textual explanations generated by a teacher model.
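As a rough illustration of what an explanation-augmented example could look like, here is a minimal sketch; the field names and example sentences are assumptions, not the datasets' actual schema.

    from dataclasses import dataclass

    @dataclass
    class ExplainableTransferExample:
        # Hypothetical schema for one explanation-augmented record.
        source: str       # original sentence (e.g., subjective or informal)
        target: str       # rewritten sentence (e.g., neutral or formal)
        explanation: str  # teacher-generated rationale for the rewrite

    example = ExplainableTransferExample(
        source="The movie was an absolute masterpiece that everyone must see.",
        target="The movie received highly positive reviews.",
        explanation=(
            "Removes the subjective intensifier 'absolute masterpiece' and the "
            "prescriptive claim 'everyone must see', replacing them with a "
            "neutral, attributable statement."
        ),
    )
    print(example.explanation)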
PACS dataset
PACS is a domain generalization benchmark spanning the Photo, Art painting, Cartoon, and Sketch domains. As used in the paper, it yields a large collection of small images, each representing a patch of a jigsaw puzzle; the patches share the same size and orientation, and the goal is to recover their original arrangement.
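A minimal sketch of this patch construction, assuming images are cut into an n x n grid of equal patches and the learning task is to recover the shuffling permutation; grid and image sizes are illustrative.

    import numpy as np

    def make_jigsaw(image, grid=3, rng=None):
        """Split an HxWxC image into grid*grid equal patches and shuffle them."""
        rng = rng or np.random.default_rng()
        h, w = image.shape[0] // grid, image.shape[1] // grid
        patches = [
            image[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(grid) for j in range(grid)
        ]
        perm = rng.permutation(len(patches))
        shuffled = np.stack([patches[p] for p in perm])
        return shuffled, perm  # a model would be trained to predict perm

    img = np.random.rand(225, 225, 3)   # toy stand-in for one image
    patches, perm = make_jigsaw(img)
    print(patches.shape, perm)          # (9, 75, 75, 3) and a permutation of 0..8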
Antholzer et al. dataset
A custom dataset created for this paper, consisting of 2107 point clouds, each with 16384 points, and three design point clouds: stripes, porous, and cut.
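For orientation, a minimal sketch of how such data is commonly laid out as arrays; the (N, 3) xyz representation and the placeholder arrays are assumptions, and only the counts come from the description above.

    import numpy as np

    NUM_CLOUDS, POINTS_PER_CLOUD = 2107, 16384

    # Placeholder arrays standing in for the real files (file format not specified here).
    clouds = np.zeros((NUM_CLOUDS, POINTS_PER_CLOUD, 3), dtype=np.float32)
    designs = {name: np.zeros((POINTS_PER_CLOUD, 3), dtype=np.float32)
               for name in ("stripes", "porous", "cut")}

    print(clouds.shape, sorted(designs))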
Stable Diffusion Prompts
A prompt dataset used in the paper for text-to-image generation and style transfer tasks.
Denoising Diffusion Probabilistic Models for Styled Walking Synthesis
Motions are from two publicly available datasets: Xia et al. [Xia et al. 2015] and HumanAct12 [Guo et al. 2020].
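To indicate how a DDPM treats such motion clips, here is a minimal sketch of the standard forward (noising) step q(x_t | x_0); the motion tensor shape and the linear beta schedule are assumptions, not values from the paper.

    import numpy as np

    # Assumed motion representation: (frames, joints, xyz) per clip.
    x0 = np.random.randn(60, 21, 3).astype(np.float32)

    T = 1000
    betas = np.linspace(1e-4, 0.02, T, dtype=np.float32)   # assumed linear schedule
    alphas_bar = np.cumprod(1.0 - betas)

    def q_sample(x0, t, rng=None):
        """x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
        rng = rng or np.random.default_rng()
        eps = rng.standard_normal(x0.shape).astype(np.float32)
        return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

    xt = q_sample(x0, t=500)
    print(xt.shape)   # same shape as the clean motion clip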