Stakeholders in explainable AI
Abstract
There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable. However, there is no general consensus over what is meant by ‘explainable’ and ‘interpretable’. In this paper, we argue that this lack of consensus is due to there being several distinct stakeholder communities. We note that, while the concerns of the individual communities are broadly...
Paper Details
Title
Stakeholders in explainable AI
Published Date
Sep 29, 2018
Journal