Detecting Hallucinated Content in Conditional Neural Sequence Generation

Neural sequence models can generate highly fluent sentences, but recent studies have shown that they are also prone to hallucinating additional content that is not supported by the input.
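The associated paper frames hallucination detection as token-level labeling of generated output. The snippet below is a minimal, hypothetical sketch of that idea, not the dataset's actual file format: the example sentences and labels are invented, and each output token is simply tagged 1 if it is not supported by the source and 0 otherwise.

```python
# Hypothetical token-level hallucination labels (illustrative only,
# not the dataset's actual format): 1 = not supported by the source.
source = "The cat sat on the mat ."
target = "The black cat sat on the red mat ."
labels = [0, 1, 0, 0, 0, 0, 1, 0, 0]  # "black" and "red" are unsupported

assert len(target.split()) == len(labels)
for token, label in zip(target.split(), labels):
    tag = "HALLUCINATED" if label == 1 else "supported"
    print(f"{token:>6s}  {tag}")
```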

Data and Resources

Cite this as

Chunting Zhou, Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer (2024). Dataset: Detecting Hallucinated Content in Conditional Neural Sequence Generation. https://doi.org/10.57702/9ymvoxih

DOI retrieved: December 16, 2024

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Defined In: https://doi.org/10.48550/arXiv.2011.02593
Author: Chunting Zhou
More Authors: Graham Neubig, Jiatao Gu, Mona Diab, Paco Guzman, Luke Zettlemoyer
Homepage: https://github.com/violet-zct/fairseq-detect-hallucination