DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models

Learning from human feedback has been shown to improve text-to-image models. These techniques first learn a reward function that captures what humans care about in the task, then fine-tune the model with reinforcement learning to maximize that learned reward.
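As a rough illustration of the two-stage idea (learn a reward, then optimize the model against it with a KL penalty toward the pretrained model, as the DPOK title suggests), here is a minimal toy sketch. It is not the paper's method applied to diffusion models: the "model" is just a categorical distribution over four candidate outputs, and the reward scores are made-up stand-ins for a learned reward model.

```python
import numpy as np

# Toy stand-in for a generative model: a categorical distribution
# over 4 candidate outputs for one prompt.
pretrained_logits = np.zeros(4)          # reference (pretrained) policy
logits = pretrained_logits.copy()        # policy being fine-tuned
reward = np.array([0.1, 0.2, 0.9, 0.3])  # hypothetical learned reward scores
kl_coef, lr = 0.1, 0.5                   # KL penalty weight, step size

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(200):
    p = softmax(logits)
    p_ref = softmax(pretrained_logits)
    # Gradient of expected reward w.r.t. logits: p * (r - E_p[r])
    baseline = p @ reward
    grad_reward = p * (reward - baseline)
    # Gradient of KL(p || p_ref) w.r.t. logits: p * (log(p/p_ref) - KL)
    log_ratio = np.log(p) - np.log(p_ref)
    kl = p @ log_ratio
    grad_kl = p * (log_ratio - kl)
    # Ascend on reward, penalize drift from the pretrained policy
    logits += lr * (grad_reward - kl_coef * grad_kl)

p = softmax(logits)
print(np.argmax(p))  # the highest-reward output becomes most likely
```

The KL term keeps the fine-tuned distribution anchored to the pretrained one; with a small `kl_coef` the optimum concentrates on the highest-reward output, while a large `kl_coef` keeps it close to the reference.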

Data and Resources

This dataset has no data

Cite this as

Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh (2024). Dataset: DPOK: Reinforcement Learning for Fine-tuning Text-to-Image Diffusion Models. https://doi.org/10.57702/ltxezq8o

Private DOI: this DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Defined in: https://doi.org/10.48550/arXiv.2305.16381
Author: Ying Fan
More authors: Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier, Pieter Abbeel, Mohammad Ghavamzadeh
Homepage: https://github.com/google-research/google-research/tree/master/dpok