References (27)
Published in 2018 in arXiv: Artificial Intelligence
Sanjeeb Dash, Oktay Günlük, Dennis Wei
This paper considers the learning of Boolean rules in either disjunctive normal form (DNF, OR-of-ANDs, equivalent to decision rule sets) or conjunctive normal form (CNF, AND-of-ORs) as an interpretable model for classification. An integer program is formulated to optimally trade classification accuracy for rule simplicity. Column generation (CG) is used to efficiently search over an exponential number of candidate clauses (conjunctions or disjunctions) without the need for heuristic rule mining....
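As a concrete illustration of the model class this abstract describes, the sketch below evaluates a small DNF rule set (OR-of-ANDs) over binary features. The clauses here are hand-picked for the example, not produced by the paper's integer program or column-generation search.

```python
# Minimal sketch: evaluating a DNF rule set (OR of ANDs) over binary features.
# The clause index lists are illustrative placeholders, not learned rules.
import numpy as np

def predict_dnf(X, clauses):
    """X: (n, d) binary feature matrix; clauses: list of feature-index lists.
    An instance is classified positive if every feature of any clause is 1."""
    hits = [X[:, c].all(axis=1) for c in clauses]  # one AND per clause
    return np.any(hits, axis=0).astype(int)        # OR across clauses

X = np.array([[1, 0, 1, 1],
              [0, 1, 1, 0],
              [1, 1, 0, 0]])
print(predict_dnf(X, clauses=[[0, 2], [1, 3]]))  # -> [1 0 0]
```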
Published on Feb 1, 2018 in Digital Signal Processing
Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller
This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) techni...
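To make the LRP idea concrete, here is a minimal sketch of the epsilon-LRP rule for a single linear layer: relevance at the layer's output is redistributed to its inputs in proportion to each input's contribution to the pre-activation. This is a generic illustration, not the tutorial's reference code.

```python
import numpy as np

def lrp_epsilon(W, b, a, R_out, eps=1e-6):
    """Redistribute relevance R_out over the inputs a of one linear
    layer z = W @ a + b, using the epsilon stabilizer."""
    z = W @ a + b                              # pre-activations of the layer
    z = np.where(z >= 0, z + eps, z - eps)     # stabilizer avoids division by zero
    s = R_out / z                              # relevance per unit of pre-activation
    return a * (W.T @ s)                       # input relevances

W = np.array([[1.0, -1.0], [0.5, 0.5]])
a = np.array([1.0, 2.0])
R = lrp_epsilon(W, np.zeros(2), a, R_out=np.array([1.0, 1.0]))
print(R, R.sum())  # the sum stays close to sum(R_out): relevance is conserved
```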
Published on Aug 1, 2017 in IJCAI (International Joint Conference on Artificial Intelligence)
Tyler McDonnell (University of Texas at Austin), Mucahid Kutlu (Qatar University), + 1 author, and Matthew Lease (University of Texas at Austin)
Published on Jun 11, 2017 in arXiv: Artificial Intelligence
Amit Dhurandhar, Vijay S. Iyengar, + 1 author, and Karthikeyan Shanmugam
We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept and so we define it relative to a target model, which may or may not be a human. We define a framework that allows for comparing interpretable procedures by linking it to important practical aspects such as accuracy and robustness. We characterize many of the current state-of-the-art interpretable methods ...
Published on May 31, 2017
Sandra Wachter, Brent Mittelstadt, Luciano Floridi (all University of Oxford)
To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
Published on May 1, 2017 in International Data Privacy Law
Sandra Wachter, Brent Mittelstadt, Luciano Floridi
Since approval of the EU General Data Protection Regulation (GDPR) in 2016, it has been widely and repeatedly claimed that a ‘right to explanation’ of decisions made by automated or artificially intelligent algorithmic systems will be legally mandated by the GDPR. This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a r...
Published on Jan 1, 2017 in arXiv: Machine Learning
Finale Doshi-Velez, Been Kim
As machine learning systems become ubiquitous, there has been a surge of interest in interpretable machine learning: systems that provide explanation for their outputs. These explanations are often used to qualitatively assess other criteria such as safety or non-discrimination. However, despite the interest in interpretability, there is very little consensus on what interpretable machine learning is and how it should be measured. In this position paper, we first define interpretability and desc...
Published on Jan 1, 2017 in arXiv: Learning
Osbert Bastani, Carolyn Kim, Hamsa Bastani (UPenn: University of Pennsylvania)
Interpretability has become incredibly important as machine learning is increasingly used to inform consequential decisions. We propose to construct global explanations of complex, blackbox models in the form of a decision tree approximating the original model---as long as the decision tree is a good approximation, then it mirrors the computation performed by the blackbox model. We devise a novel algorithm for extracting decision tree explanations that actively samples new training points to avo...
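A stripped-down version of the surrogate idea (without the paper's active sampling, replaced here by plain random sampling) can be written in a few lines with scikit-learn: label fresh points with the black-box model, fit a shallow tree to those labels, and measure fidelity. The data and black-box model below are placeholders.

```python
# Minimal sketch of a global surrogate-tree explanation, assuming a
# scikit-learn-style black box with a .predict method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)

black_box = RandomForestClassifier(n_estimators=100).fit(X, y)

# Label fresh samples with the black box, then fit a small tree to mimic it.
X_new = rng.normal(size=(5000, 4))
y_new = black_box.predict(X_new)
surrogate = DecisionTreeClassifier(max_depth=3).fit(X_new, y_new)

fidelity = (surrogate.predict(X_new) == y_new).mean()
print(f"fidelity to black box: {fidelity:.2f}")
print(export_text(surrogate))  # the human-readable explanation
```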
Published on Jan 1, 2017 in arXiv: Artificial Intelligence
Tim Miller, Piers Howe, Liz Sonenberg
In his seminal book 'The Inmates are Running the Asylum: Why High-Tech Products Drive Us Crazy And How To Restore The Sanity' [2004, Sams Indianapolis, IN, USA], Alan Cooper argues that a major reason why software is often poorly designed (from a user perspective) is that programmers are in charge of design decisions, rather than interaction designers. As a result, programmers design software for themselves, rather than for their target audience, a phenomenon he refers to as the 'inmates running...
Cited By (1)
Recently, a lot of attention has been given to undesired consequences of Artificial Intelligence (AI), such as unfair bias leading to discrimination, or the lack of explanations of the results of AI systems. There are several important questions to answer before AI can be deployed at scale in our businesses and societies. Most of these issues are being discussed by experts and the wider communities, and it seems there is broad consensus on where they come from. There is, however, less consensus ...
Published on Sep 6, 2019 in arXiv: Artificial Intelligence
Vijay Arya, Rachel K. E. Bellamy, + 17 authors, and Aleksandra Mojsilovic
As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (this http URL), an open-source software toolkit featuring eig...
Published on Jun 5, 2019 in arXiv: Learning
Noel C. F. Codella, Michael Hind (IBM), + 5 authors, and Aleksandra Mojsilovic
Using machine learning in high-stakes applications often requires predictions to be accompanied by explanations comprehensible to the domain user, who has ultimate responsibility for decisions and outcomes. Recently, a new framework for providing explanations, called TED, has been proposed to provide meaningful explanations for predictions. This framework augments training data to include explanations elicited from domain users, in addition to features and labels. This approach ensures that expl...
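The simplest instantiation of the TED idea pairs each training label with an explanation and trains a single classifier on the combined target, decoding both back out at prediction time. The sketch below shows that Cartesian-product encoding with made-up data and illustrative names; it is not the TED reference implementation.

```python
# Minimal sketch of TED-style training: each instance carries a label y and
# an explanation e; one classifier learns the combined target y * E + e.
import numpy as np
from sklearn.linear_model import LogisticRegression

NUM_EXPLANATIONS = 3  # assumed size of the explanation vocabulary

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] > 0).astype(int)              # labels
e = np.digitize(X[:, 1], [-0.5, 0.5])      # stand-in explanation ids (0..2)

combined = y * NUM_EXPLANATIONS + e        # encode (label, explanation) pairs
clf = LogisticRegression(max_iter=1000).fit(X, combined)

pred = clf.predict(X[:5])
pred_y, pred_e = pred // NUM_EXPLANATIONS, pred % NUM_EXPLANATIONS
print(pred_y, pred_e)                      # decoded label + explanation id
```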