Human Preference Data about Helpfulness and Harmlessness

This dataset provides human preference data on helpfulness and harmlessness, used for aligning large language models with human preferences.
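Datasets of this kind typically pair a prompt with a preferred ("chosen") and a dispreferred ("rejected") response. The sketch below illustrates that common structure; the field names and record layout are assumptions for illustration, not confirmed details of this dataset.

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    """One annotated comparison, assuming the common chosen/rejected layout."""
    prompt: str    # query shown to annotators (hypothetical example)
    chosen: str    # response preferred for helpfulness/harmlessness
    rejected: str  # the dispreferred alternative

def to_training_example(pair: PreferencePair) -> dict:
    # Flatten a pair into the dict format commonly consumed by
    # preference-optimization trainers (field names are assumptions).
    return {"prompt": pair.prompt, "chosen": pair.chosen, "rejected": pair.rejected}

pair = PreferencePair(
    prompt="How do I stay safe online?",
    chosen="Use strong, unique passwords and enable two-factor authentication.",
    rejected="Share your passwords with friends so you don't forget them.",
)
example = to_training_example(pair)
print(example["chosen"])
```

A reward model or direct preference method can then be trained to score the chosen response above the rejected one for the same prompt.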


Cite this as

Feiteng Fang, Liang Zhu, Min Yang, Xi Feng, Jinchang Hou, Qixuan Zhao, Chengming Li, Xiping Hu, Ruifeng Xu (2024). Dataset: Human Preference Data about Helpfulness and Harmlessness. https://doi.org/10.57702/wmeeo6fz

DOI retrieved: December 2, 2024

Additional Info

Field Value
Created December 2, 2024
Last update December 2, 2024
Defined In https://doi.org/10.48550/arXiv.2403.16649
Authors Feiteng Fang, Liang Zhu, Min Yang, Xi Feng, Jinchang Hou, Qixuan Zhao, Chengming Li, Xiping Hu, Ruifeng Xu
Homepage https://github.com/calubkk/CLHA