PLUM: Preference Learning Plus Test Cases Yields Better Code Language Models

Instruction-finetuned code language models have shown promise in various programming tasks. They are trained with a language modeling objective on pairs of natural language instructions and gold code snippets.
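To make the training setup concrete, below is a minimal sketch of that supervised objective, assuming a Hugging Face-style causal language model and tokenizer. Masking the instruction tokens out of the loss is a common convention for instruction tuning, not something this page specifies; the function name and setup are illustrative.

```python
# Minimal sketch (not the authors' code): causal language-modeling loss on an
# (instruction, gold code) pair, with the loss computed only on the code tokens.
import torch
import torch.nn.functional as F

def sft_loss(model, tokenizer, instruction: str, gold_code: str) -> torch.Tensor:
    prompt_ids = tokenizer(instruction, return_tensors="pt").input_ids
    code_ids = tokenizer(gold_code, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, code_ids], dim=1)

    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # mask instruction tokens from the loss

    logits = model(input_ids).logits
    # Shift so each position predicts the next token.
    shift_logits = logits[:, :-1, :]
    shift_labels = labels[:, 1:]
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        ignore_index=-100,
    )
```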

Data and Resources

Cite this as

Dylan Zhang, Shizhe Diao, Xueyan Zou, Hao Peng (2024). Dataset: PLUM: Preference Learning Plus Test Cases Yields Better Code Language Models. https://doi.org/10.57702/jmb64hib

DOI retrieved: December 16, 2024

Additional Info

Field         Value
Created       December 16, 2024
Last update   December 16, 2024
Defined In    https://doi.org/10.48550/arXiv.2406.06887
Author        Dylan Zhang
More Authors  Shizhe Diao, Xueyan Zou, Hao Peng
Homepage      https://arxiv.org/abs/2406.06887