- TIE: Topological Information Enhanced Structural Reading Comprehension on Web...
  Topological Information Enhanced model (TIE) for structural reading comprehension on web pages.
- Multi-view Content-aware Indexing for Long Document Retrieval
  Long document question answering (DocQA) aims to answer questions from long documents of over 10k words, which usually contain content structures such as sections, sub-sections, and...
- DQG dataset for reading comprehension
  The dataset for Difficulty-controllable Question Generation (DQG) for reading comprehension, prepared by the authors.
- Causal-VidQA
  This dataset is used in the paper to evaluate the performance of the TranSTR architecture.
- ProofWriter
  ProofWriter: generating implications, proofs, and abductive statements over natural language.
- ActivityNet-QA
  Video question answering (VideoQA) is an essential task in vision-language understanding and has attracted considerable research attention recently. Nevertheless, existing works...
- Simple Question dataset
  A set of categorical probability distributions over a finite set of categories A = {a1, ..., ak}, used to evaluate the proposed...
- CelebA-Spoof: Large-scale face anti-spoofing dataset with rich annotations
  A large-scale face anti-spoofing dataset with rich annotations.
- ENsEN Dataset
  The ENsEN dataset is used to evaluate the usefulness of semantic snippets. It contains 10 tasks, each with three questions on a common topic.
- Dataset for Evaluating Query-biased Ranking of LOD Resources
  Used to evaluate query-biased ranking of Linked Open Data (LOD) resources. It contains 30 queries, 150 HTML Web pages, and 81 detected resources per Web page.
- ProKnow-data
  A collection of diagnostic conversations guided by the safety constraints and process knowledge (ProKnow) that healthcare professionals use.
- STAR: A Benchmark for Situated Reasoning in Real-World Videos
  The STAR dataset provides 60K situated reasoning questions based on 22K trimmed situation video clips.
- AGQA: A Benchmark for Compositional Spatio-Temporal Reasoning
  The AGQA benchmark contains 192M question-answer pairs, generated from hand-crafted programs, about 9.6K videos from the Charades dataset.