Direct preference optimization: Your language model is secretly a reward model
Data and Resources
Original Metadata JSON
The JSON representation of the dataset, with its distributions, based on DCAT.
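The DCAT-based JSON record can be consumed programmatically. A minimal sketch, assuming a record shape that follows the DCAT vocabulary (`dct:title`, `dcat:distribution`); the embedded excerpt and its access URL are hypothetical, not this catalog's actual export:

```python
import json

# Hypothetical excerpt of a DCAT dataset record. Field names follow the
# DCAT vocabulary, but the exact shape of this catalog's JSON export is
# an assumption; the accessURL is a placeholder.
dcat_record = """
{
  "dct:title": "Direct preference optimization: Your language model is secretly a reward model",
  "dct:identifier": "https://doi.org/10.57702/wgpeg5j4",
  "dcat:distribution": [
    {"dct:format": "JSON", "dcat:accessURL": "https://example.org/dataset.json"}
  ]
}
"""

record = json.loads(dcat_record)
print(record["dct:title"])
# List each distribution's format and access URL.
for dist in record["dcat:distribution"]:
    print(dist["dct:format"], dist["dcat:accessURL"])
```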
Cite this as
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D. Manning, Chelsea Finn (2024). Dataset: Direct preference optimization: Your language model is secretly a reward model. https://doi.org/10.57702/wgpeg5j4
DOI retrieved: December 2, 2024
Additional Info
Field | Value
---|---
Created | December 2, 2024
Last update | December 2, 2024
Author | Rafael Rafailov
More Authors |