Stochastic Concept Bottleneck Models
Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the...
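To make the bottleneck idea concrete, here is a minimal toy sketch of the two-stage CBM structure: inputs are first mapped to predicted concepts, and the label head sees only those concepts. All data, dimensions, and the use of simple least-squares fits are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: inputs X, binary concept annotations C, labels y.
X = rng.normal(size=(100, 8))
W_true = rng.normal(size=(8, 3))
C = (X @ W_true > 0).astype(float)        # 3 human-understandable concepts
y = (C.sum(axis=1) >= 2).astype(float)    # label depends only on concepts

# Stage 1: predict concepts from inputs (here: a linear probe per concept).
W_xc = np.linalg.lstsq(X, C, rcond=None)[0]
C_hat = sigmoid(4.0 * (X @ W_xc))

# Stage 2 (the bottleneck): the label head sees ONLY predicted concepts,
# so each prediction can be explained in terms of concept activations.
W_cy = np.linalg.lstsq(C_hat, y, rcond=None)[0]
y_hat = (C_hat @ W_cy > 0.5).astype(float)

print("concept accuracy:", ((C_hat > 0.5) == C).mean())
print("label accuracy:  ", (y_hat == y).mean())
```

Because the label depends on the inputs only through the concept layer, a practitioner can inspect (or intervene on) the predicted concepts to understand or correct a prediction.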
Graph Neural Networks Including SparSe inTerpretability (GISST)
Graph Neural Networks (GNNs) are versatile, powerful machine learning methods that enable graph structure and feature representation learning, and have applications across many...
Optimizing for Interpretability in Deep Neural Networks with Tree Regularization
Deep models have advanced prediction in many domains, but their lack of interpretability remains a key barrier to adoption in many real-world applications. This work...
Human-in-the-Loop Interpretability Prior
The paper evaluates on a collection of datasets, including synthetic, mushroom, census, and covertype.