Optimizing for Interpretability in Deep Neural Networks with Tree Regularization
Deep models have advanced prediction in many domains, but their lack of interpretability remains a key barrier to their adoption in many real-world applications. This work introduces a novel approach that optimizes deep models for interpretability by explicitly regularizing them to behave like compact, axis-aligned decision trees.
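The sketch below illustrates the core idea under some assumptions: the non-differentiable regularizer (the average decision-path length of a tree fit to mimic the network) is approximated by a small surrogate network trained on (weights, tree depth) pairs, and the surrogate's output is added to the training loss. Names such as `true_avg_path_length` and `SurrogateNet` are illustrative, not the authors' reference implementation.

```python
# Minimal sketch of tree regularization, assuming PyTorch and scikit-learn.
# This is a toy illustration of the idea, not the paper's official code.
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

def true_avg_path_length(model, X, max_depth=8):
    """Fit an axis-aligned tree to mimic the model; return mean decision-path length."""
    with torch.no_grad():
        y_hat = (model(X) > 0.5).long().numpy().ravel()
    if len(np.unique(y_hat)) < 2:        # degenerate case: tree is a single leaf
        return 0.0
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X.numpy(), y_hat)
    depths = tree.decision_path(X.numpy()).sum(axis=1)  # nodes visited per sample
    return float(depths.mean())

class SurrogateNet(nn.Module):
    """Differentiable map from flattened model weights to an estimated path length."""
    def __init__(self, n_params, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_params, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, w):
        return self.net(w).squeeze(-1)

def flat_params(model):
    # Keeps the autograd graph so the surrogate penalty backpropagates into the model.
    return torch.cat([p.reshape(-1) for p in model.parameters()])

# Toy setup: a small binary classifier on 2-D inputs.
torch.manual_seed(0)
X = torch.randn(512, 2)
y = ((X[:, 0] + X[:, 1]) > 0).float()
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
surrogate = SurrogateNet(flat_params(model).numel())
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
s_opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
lam, history = 0.1, []

for step in range(200):
    # (1) Supervised loss plus the surrogate's differentiable depth estimate.
    pred = model(X).squeeze(-1)
    loss = nn.functional.binary_cross_entropy(pred, y) \
           + lam * surrogate(flat_params(model))
    opt.zero_grad(); loss.backward(); opt.step()

    # (2) Periodically record (weights, true tree depth) pairs and refit the surrogate.
    if step % 20 == 0:
        history.append((flat_params(model).detach(), true_avg_path_length(model, X)))
        W = torch.stack([w for w, _ in history])
        d = torch.tensor([depth for _, depth in history])
        for _ in range(50):
            s_loss = nn.functional.mse_loss(surrogate(W), d)
            s_opt.zero_grad(); s_loss.backward(); s_opt.step()
```

Increasing `lam` trades predictive accuracy for decision boundaries that a shallow tree can reproduce; setting it to zero recovers ordinary training.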
BibTeX: