Language Models as Inductive Reasoners

Inductive reasoning is a core component of human intelligence. In past computer-science research on inductive reasoning, logic languages have been used to represent knowledge (specifically, facts and rules). However, logic languages cause systematic problems for inductive reasoning, such as the inability to handle raw input like natural language, sensitivity to mislabeled data, and incapacity to handle ambiguous input. To this end, we propose a new task: inducing natural language rules from natural language facts. We create a dataset for this task, termed DEER, containing 1.2k rule-fact pairs in which both rules and facts are written in natural language.

Data and Resources

This dataset has no data

Cite this as

Zonglin Yang, Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei (2024). Dataset: Language Models as Inductive Reasoners. https://doi.org/10.57702/h5kzw4gl

Private DOI: This DOI is not yet resolvable.
It is available for use in manuscripts and will become resolvable when the dataset is made public.

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Defined in: https://doi.org/10.48550/arXiv.2212.10923
Author: Zonglin Yang
More authors: Li Dong, Xinya Du, Hao Cheng, Erik Cambria, Xiaodong Liu, Jianfeng Gao, Furu Wei