Concept attribution: Explaining CNN decisions to physicians

Volume: 123, Article number: 103865
Published: Aug 1, 2020
Abstract
Deep learning explainability is often achieved through gradient-based approaches that attribute the network output to perturbations of the input pixels. However, the relevance of input pixels may be difficult to relate to relevant image features in some applications, e.g. diagnostic measures in medical imaging. The framework described in this paper shifts the attribution focus from pixel values to user-defined concepts. By checking if certain...
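The shift from pixel attribution to concept attribution can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it assumes a CAV-style setup in which a linear direction separating concept-positive from concept-negative activations is learned at an intermediate layer, and the concept's influence is scored as a directional derivative along that direction. The activations and the gradient are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for intermediate CNN activations (n_samples x n_units).
# In practice these would be extracted from a chosen layer of the
# trained network for images with and without the concept.
concept_present = rng.normal(loc=1.0, size=(50, 8))
concept_absent = rng.normal(loc=0.0, size=(50, 8))

# 1. Fit a linear separator in activation space; its normal defines the
#    concept direction (closed-form least-squares here instead of the
#    SVM or regression typically used in practice).
X = np.vstack([concept_present, concept_absent])
y = np.concatenate([np.ones(50), -np.ones(50)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
concept_vector = w / np.linalg.norm(w)

# 2. Concept sensitivity: directional derivative of the network output
#    along the concept direction. The gradient of the output with
#    respect to the layer activations is faked for one test image.
grad_output_wrt_activations = rng.normal(size=8)
sensitivity = float(grad_output_wrt_activations @ concept_vector)

# A positive value suggests that strengthening the concept's expression
# at this layer would raise the network's output for this image.
print(sensitivity)
```

The per-image sensitivities are typically aggregated over a test set (e.g. the fraction of images with positive sensitivity) to summarize how strongly a class prediction relies on a given clinical concept.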