
TED: Teaching AI to Explain its Decisions

Published on Jan 27, 2019 in AIES (AAAI/ACM Conference on AI, Ethics, and Society) · DOI: 10.1145/3306618.3314273
Michael Hind (IBM), Dennis Wei (IBM), 5 more authors, and Kush R. Varshney (IBM)
Abstract
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be meaningful to the consumer. In contrast, this paper suggests a new approach to this problem. It introduces a simple, practical framework, called Teaching Explanations for Decisions (TED), that provides meaningful explanations that match the mental model of the consumer. We illustrate the generality and effectiveness of this approach with two different examples, resulting in highly accurate explanations with no loss of prediction accuracy for these two examples.
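To make the framework concrete, here is a minimal sketch of TED's simplest instantiation: each (decision, explanation) pair in the training data is encoded as a single combined class, any multiclass learner is trained on those codes, and its predictions are decoded back into a decision plus an explanation. The class name TEDCartesian and the random-forest base learner below are illustrative choices, not the paper's reference implementation.

```python
from sklearn.ensemble import RandomForestClassifier

class TEDCartesian:
    """Sketch of the TED idea: treat each (label, explanation) pair as
    one combined class, train an ordinary multiclass model on it, and
    decode its predictions back into a label plus an explanation."""

    def __init__(self, base_model=None):
        # Any multiclass classifier can serve as the base learner.
        self.base = base_model or RandomForestClassifier()
        self.encode, self.decode = {}, {}

    def fit(self, X, Y, E):
        # Map every distinct (label, explanation) pair to an integer code.
        combined = []
        for y, e in zip(Y, E):
            code = self.encode.setdefault((y, e), len(self.encode))
            self.decode[code] = (y, e)
            combined.append(code)
        self.base.fit(X, combined)
        return self

    def predict_explain(self, X):
        # Each prediction decodes into (label, explanation) simultaneously.
        return [self.decode[c] for c in self.base.predict(X)]
```

A learner trained this way predicts the label and the explanation together, so the explanations it returns are, by construction, phrased in the same terms the training-data annotators used, which is the match to the consumer's mental model that the abstract describes.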
References (30)
Grégoire Montavon, Wojciech Samek, and Klaus-Robert Müller
This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique...
295 citations
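The LRP technique mentioned above redistributes a network's output score backwards, layer by layer, in proportion to each neuron's contribution. As an illustration only (not the tutorial's reference code), a backward step through one linear layer under the common epsilon rule might look like this:

```python
import numpy as np

def lrp_epsilon_step(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto a layer's inputs.

    a:     (n_in,)        input activations of the layer
    W:     (n_in, n_out)  weight matrix
    R_out: (n_out,)       relevance assigned to the layer's outputs
    """
    z = a @ W                                # total contribution per output unit
    z = np.where(z >= 0, z + eps, z - eps)   # epsilon stabilizer avoids division by zero
    s = R_out / z                            # relevance per unit of contribution
    return a * (W @ s)                       # each input's share, shape (n_in,)
```

Iterating this step from the output back to the input yields a relevance score per input feature (e.g., per pixel); the epsilon term trades a little of LRP's conservation property for numerical stability.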
Tim Miller, Piers Howe, and Liz Sonenberg
In his seminal book 'The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity' [2004, Sams, Indianapolis, IN, USA], Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge of design decisions, rather than interaction designers. As a result, programmers design software for themselves, rather than for their target audience, a phenomenon he refers to as the 'inmates running...
26 citations
Andrew D. Selbst (Yale University) and Julia Powles (St. John's College)
46 citations
We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept and so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking it to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods ...
7 citations
Sandra Wachter, Brent Mittelstadt, and Luciano Floridi (University of Oxford)
To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
36 citations
Osbert Bastani, Carolyn Kim, and Hamsa Bastani (University of Pennsylvania)
Interpretability has become incredibly important as machine learning is increasingly used to inform consequential decisions. We propose to construct global explanations of complex, blackbox models in the form of a decision tree approximating the original model---as long as the decision tree is a good approximation, then it mirrors the computation performed by the blackbox model. We devise a novel algorithm for extracting decision tree explanations that actively samples new training points to avoid...
30 citations
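The extraction idea in this abstract is essentially model distillation: label data with the blackbox, fit a small tree to those labels, and measure how faithfully the tree tracks the blackbox. A generic sketch follows; the paper's active-sampling step is omitted, and distill_to_tree and its parameters are illustrative names, not the paper's API.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distill_to_tree(blackbox_predict, X_pool, max_depth=4):
    """Fit a small decision tree that mimics a blackbox model.

    blackbox_predict: function mapping an array of inputs to predicted labels
    X_pool:           unlabeled points on which to query the blackbox
    """
    y_bb = blackbox_predict(X_pool)                   # labels come from the blackbox
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X_pool, y_bb)
    fidelity = np.mean(tree.predict(X_pool) == y_bb)  # agreement with the blackbox
    return tree, fidelity
```

Fidelity on held-out data, rather than accuracy against ground truth, is the quantity that tells you whether the tree can stand in for the blackbox as an explanation.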
Sandra Wachter, Brent Mittelstadt, and Luciano Floridi
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a 'right to explanation' of decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right...
106 citations
As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and describe...
379 citations
Ian Goodfellow (Google), Yoshua Bengio (Université de Montréal), and Aaron Courville (Université de Montréal)
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book ...
12.3k citations
Cited by (3)
We study explainable AI (XAI) for the face recognition task, particularly face verification. Face verification is a crucial task and has been deployed in many applications, such as access control, surveillance, and automatic personal log-on for mobile devices. With the increasing amount of data, deep convolutional neural networks can achieve very high accuracy on the face verification task. Beyond these exceptional performances, deep face verification models need more...
Feb 7, 2020 in AAAI (National Conference on Artificial Intelligence)
Julian Zucker and Myraeka d'Leeuwen (Northeastern University)
Richard Benjamins, Alberto Barbado, and Daniel Sierra (Telefónica)
Recently, a lot of attention has been given to undesired consequences of Artificial Intelligence (AI), such as unfair bias leading to discrimination, or the lack of explanations of the results of AI systems. There are several important questions to answer before AI can be deployed at scale in our businesses and societies. Most of these issues are being discussed by experts and the wider communities, and it seems there is broad consensus on where they come from. There is, however, less consensus ...
1 citation
Vijay Arya, 18 more authors, and Yunfeng Zhang
As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (this http URL), an open-source software toolkit featuring eight...
4 citations
Noel C. F. Codella, Michael Hind (IBM), 5 more authors, and Aleksandra Mojsilovic