Anthropic Helpfulness Base eval
The dataset used in the paper is the Anthropic Helpfulness Base eval dataset.
Anthropic Helpfulness Base
The datasets used in the paper are the Anthropic Helpfulness Base train set and the Anthropic Helpfulness eval set.
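As a minimal sketch of how these splits might be pulled: the public Anthropic/hh-rlhf repository on the Hugging Face Hub exposes the helpfulness-base data through a data_dir argument, with train and test splits. The repository id, the data_dir name, and the chosen/rejected field names below are taken from the public dataset card and should be verified there.

    # Minimal sketch: load the Anthropic Helpfulness Base train/eval splits.
    # Assumes the public Anthropic/hh-rlhf repository on the Hugging Face Hub,
    # where data_dir="helpful-base" holds the helpfulness-base preference pairs
    # and each record carries "chosen" and "rejected" conversation strings.
    from datasets import load_dataset

    train = load_dataset("Anthropic/hh-rlhf", data_dir="helpful-base", split="train")
    evals = load_dataset("Anthropic/hh-rlhf", data_dir="helpful-base", split="test")
    print(len(train), len(evals))
    print(train[0]["chosen"][:200])    # preferred conversation (truncated)
    print(train[0]["rejected"][:200])  # dispreferred conversation (truncated)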
Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback
The dataset used in the paper Multimodal Query Suggestion with Multi-Agent Reinforcement Learning from Human Feedback.
HIVE: Harnessing Human Feedback for Instructional Visual Editing
The dataset used in the paper HIVE: Harnessing Human Feedback for Instructional Visual Editing, collected for instructional visual editing.
Differences in Fairness Preferences
A crowdsourced dataset for studying differences in fairness preferences depending on demographic identities.
Anthropic HH dataset
The Anthropic HH dataset is a general-purpose preference dataset for helpfulness and harmlessness.
Training a helpful and harmless assistant with reinforcement learning from human feedback
The authors propose a novel approach that incorporates parameter-efficient tuning to better optimize control tokens, thus benefiting controllable generation.
SHP dataset
The SHP (Stanford Human Preferences) dataset is used to evaluate the performance of the proposed Compositional Preference Models (CPMs).
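For orientation, a hedged sketch of loading SHP from the Hugging Face Hub. The stanfordnlp/SHP repository id and the history/human_ref_A/human_ref_B/labels field names come from the public dataset card and are assumptions to verify, not guarantees.

    # Minimal sketch: read one SHP preference pair.
    # Repository id and field names are taken from the public dataset card
    # (stanfordnlp/SHP); confirm them against the card before relying on this.
    from datasets import load_dataset

    shp = load_dataset("stanfordnlp/SHP", split="train")
    ex = shp[0]
    # labels == 1 means human_ref_A was preferred over human_ref_B
    preferred = ex["human_ref_A"] if ex["labels"] == 1 else ex["human_ref_B"]
    print(ex["history"][:200])  # the Reddit post/question being answered
    print(preferred[:200])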
HH-RLHF dataset
The HH-RLHF dataset is used to evaluate the performance of the proposed Compositional Preference Models (CPMs).
Toxic-DPO Dataset
The dataset used in the paper is the Toxic-DPO dataset, a preference dataset used for direct preference optimization (DPO).
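Since the entry names direct preference optimization, here is a minimal sketch of the standard DPO objective such (chosen, rejected) pairs feed into (Rafailov et al., 2023); this is the generic loss, not code from the paper.

    # Minimal sketch: the standard DPO loss over a batch of preference pairs,
    # given per-example summed log-probabilities (1-D tensors) from the policy
    # and from a frozen reference model.
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        chosen_ratio = policy_chosen_logps - ref_chosen_logps        # log pi/ref, chosen
        rejected_ratio = policy_rejected_logps - ref_rejected_logps  # log pi/ref, rejected
        # push the chosen log-ratio above the rejected one, scaled by beta
        return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()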
Anthropic-HH-RLHF Dataset
The dataset used in the paper is the Anthropic-HH-RLHF dataset, which is used for reinforcement learning from human feedback.
UltraRM-13B
UltraRM-13B is a 13B-parameter reward model trained on a mixture of preference datasets (notably UltraFeedback) and used to score model responses during language model training.
AlpacaFarm
The AlpacaFarm dataset is a large-scale dataset for preference optimization, consisting of instructions paired with model responses and preference annotations.
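As a hedged sketch, the AlpacaFarm data is commonly pulled from the tatsu-lab/alpaca_farm repository on the Hugging Face Hub. The repository id, the alpaca_human_preference config, and the field names below are assumptions taken from the public dataset card; check the card before relying on them.

    # Minimal sketch: inspect one AlpacaFarm human-preference record.
    # Repository id, config name, and fields (instruction, output_1, output_2,
    # preference) are assumptions from the public dataset card; verify there.
    from datasets import load_dataset

    ds = load_dataset("tatsu-lab/alpaca_farm", "alpaca_human_preference")
    split = next(iter(ds.values()))  # take whichever split the config exposes
    ex = split[0]
    chosen = ex["output_1"] if ex["preference"] == 1 else ex["output_2"]
    print(ex["instruction"])
    print(chosen[:200])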
Anthropic-HH
The Anthropic-HH dataset is a collection of human feedback for language model training.