Visual Classification as Linear Combination of Words
Explainability is a longstanding challenge in deep learning, especially in high-stakes domains like healthcare. Common explainability methods highlight the image regions that drive a model’s decision. Humans, however, rely heavily on language to explain not only “where” but also “what”. Moreover, most explainability approaches explain individual predictions rather than describing the features a model relies on globally.
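The title suggests the core idea: express each class’s weight vector as a linear combination of word embeddings in a shared image-text embedding space, so the coefficients over words serve as a language-level, model-wide explanation. The sketch below is a minimal, hypothetical illustration of that idea, not the paper’s implementation; the embeddings, vocabulary, and coefficients are all assumed stand-ins (in practice they would come from a vision-language model such as CLIP and from training).

```python
# Hypothetical sketch (assumed, not the paper's code): classify an image
# embedding against class weights constrained to be linear combinations
# of word embeddings, so the per-word coefficients explain "what" each
# class weight encodes in language.
import numpy as np

rng = np.random.default_rng(0)
d, n_words = 8, 5

# Assumed inputs: word embeddings and an image embedding living in the
# same d-dimensional space (e.g. from a vision-language model).
word_embeddings = rng.normal(size=(n_words, d))  # one row per word
image_embedding = rng.normal(size=d)

# Per-class coefficients over the word vocabulary (made-up values):
# each class's weight vector is a linear combination of word embeddings.
coeffs = {
    "cat": np.array([0.9, 0.1, 0.0, 0.0, 0.0]),
    "dog": np.array([0.0, 0.0, 0.8, 0.2, 0.0]),
}

def score(class_name):
    # Weight vector reconstructed in the embedding space.
    w = coeffs[class_name] @ word_embeddings
    return float(image_embedding @ w)

prediction = max(coeffs, key=score)
# The largest coefficients name the words the classifier relies on,
# giving a global, language-based explanation of the class weight.
```

By linearity, the score equals the coefficient vector applied to the word-image similarities, which is what lets the coefficients double as explanations.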