
Toxic-DPO Dataset

The paper uses the Toxic-DPO dataset, a preference dataset for reinforcement learning from human feedback (RLHF).


Cite this as

Unalignment (2024). Dataset: Toxic-DPO Dataset. https://doi.org/10.57702/eflhjtjl

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Author: Unalignment
Homepage: https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2