- LEAMR dataset and aligner
  We release a dataset of alignments for over 60,000 sentences along with our aligner code to facilitate more accurate models and greater interpretability in future AMR research.
- Latent Distance Guided Alignment Training for Large Language Models
  Ensuring alignment with human preferences is a crucial characteristic of large language models (LLMs). Presently, the primary alignment methods, RLHF and DPO, require extensive...
- A general language assistant as a laboratory for alignment
  A general language assistant for aligning language models with human users.
- Alignment of language agents
  A dataset for aligning language agents.