Optimizing for Interpretability in Deep Neural Networks with Tree Regularization

Deep models have advanced prediction in many domains, but their lack of interpretability remains a key barrier to adoption in many real-world applications. This work introduces a novel approach to optimizing deep models for interpretability by explicitly regularizing them to resemble compact, axis-aligned decision trees.
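As a rough illustration of the idea, the sketch below trains a toy binary classifier with an added penalty on the average decision-path length of a tree fit to the network's own predictions; because that quantity is not differentiable, a small surrogate network estimates it from the model's parameters, following the approach described in the linked paper (arXiv:1908.05254). This is a minimal sketch assuming PyTorch and scikit-learn; the layer sizes, penalty weight, refit interval, and toy data are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of tree regularization (assumed names and hyperparameters).
import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

def average_path_length(model, X):
    """Fit a tree to the model's predictions and return the mean number of
    decision nodes visited per sample (the non-differentiable target)."""
    with torch.no_grad():
        y_hat = (model(X) > 0.5).long().numpy().ravel()
    tree = DecisionTreeClassifier(max_depth=10).fit(X.numpy(), y_hat)
    return float(tree.decision_path(X.numpy()).sum(axis=1).mean())

def flatten_params(model):
    return torch.cat([p.view(-1) for p in model.parameters()])

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Linear(32, 1), nn.Sigmoid())
# Surrogate maps flattened weights to an estimated average path length.
surrogate = nn.Sequential(nn.Linear(flatten_params(model).numel(), 25),
                          nn.ReLU(), nn.Linear(25, 1))

lam = 0.1  # strength of the interpretability penalty (assumed value)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sopt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
bce = nn.BCELoss()

X = torch.randn(256, 10)               # toy data standing in for a real dataset
y = (X[:, 0] > 0).float().unsqueeze(1)

for step in range(200):
    if step % 20 == 0:
        # Periodically refit the surrogate so its output tracks the true
        # average path length at the current parameters.
        w = flatten_params(model).detach()
        target = torch.tensor([average_path_length(model, X)])
        sloss = nn.functional.mse_loss(surrogate(w), target)
        sopt.zero_grad()
        sloss.backward()
        sopt.step()
    # Differentiable training loss: prediction error plus the surrogate's
    # estimate of the mimic tree's average path length.
    loss = bce(model(X), y) + lam * surrogate(flatten_params(model)).squeeze()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the surrogate is differentiable, gradients of the penalty flow back into the model's weights, nudging training toward parameter settings whose behavior a shallow tree can mimic.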

Cite this as

Mike Wu, Sonali Parbhoo, Michael C. Hughes, Volker Roth, Finale Doshi-Velez (2024). Dataset: Optimizing for Interpretability in Deep Neural Networks with Tree Regularization. https://doi.org/10.57702/kju52djf

DOI retrieved: December 2, 2024

Additional Info

Field         Value
Created       December 2, 2024
Last update   December 2, 2024
Defined In    https://doi.org/10.48550/arXiv.1908.05254
Author        Mike Wu
More Authors  Sonali Parbhoo, Michael C. Hughes, Volker Roth, Finale Doshi-Velez