
Direct preference optimization: Your language model is secretly a reward model

The dataset used in the paper is not explicitly described in this record. The paper itself introduces Direct Preference Optimization (DPO), a method that fine-tunes a language model directly on human preference data with a simple classification-style loss, avoiding the explicit reward modeling and reinforcement learning steps used in RLHF.
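The core of DPO is its per-example loss, which pushes the policy to increase the log-probability margin of the preferred response over the rejected one, relative to a frozen reference model. Below is a minimal Python sketch of that loss; the function name, the β value, and the example log-probabilities are illustrative, not taken from the paper's code.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * margin of log-ratios).

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the trained policy and under the frozen reference model.
    """
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_ratio - rejected_ratio)
    # -log(sigmoid(logits)), written out with math.exp for clarity
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# At initialization the policy equals the reference, so both log-ratios
# are zero and the loss is log(2); as the policy learns to prefer the
# chosen response more than the reference does, the loss falls below log(2).
loss_at_init = dpo_loss(-10.0, -12.0, -10.0, -12.0)
loss_improved = dpo_loss(-9.0, -13.0, -10.0, -12.0)
```

The β hyperparameter controls how strongly the policy is allowed to deviate from the reference model; smaller values keep the fine-tuned model closer to the reference.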

Data and Resources

Cite this as

Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, Chelsea Finn (2024). Dataset: Direct preference optimization: Your language model is secretly a reward model. https://doi.org/10.57702/wgpeg5j4

DOI retrieved: December 2, 2024

Additional Info

Created: December 2, 2024
Last update: December 2, 2024
Author: Rafael Rafailov
More Authors: Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, Chelsea Finn