Dennis Wei
IBM
Machine learning · Mathematical optimization · Resource allocation · Mathematics · Computer science
72 Publications
14 H-index
757 Citations
Publications (70)
Chirag Nagpal (Carnegie Mellon University), Dennis Wei (IBM), ..., Kush R. Varshney (IBM) (7 authors)
Michael Hind, Dennis Wei, Yunfeng Zhang (3 authors)
Many proposed methods for explaining machine learning predictions are in fact challenging to understand for nontechnical consumers. This paper builds upon an alternative consumer-driven approach called TED that asks for explanations to be provided in training data, along with target labels. Using semi-synthetic data from credit approval and employee retention applications, experiments are conducted to investigate some practical considerations with TED, including its performance with different cl...
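The TED idea described above can be illustrated in its simplest form: encode each training example's (label, explanation) pair as one combined class, train an ordinary classifier on it, and decode both parts at prediction time. This is a minimal sketch on synthetic toy data, not the paper's implementation; all variable names and the data-generating rule are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Toy TED-style training set: each example carries a target label y
# and an explanation id e elicited from the domain user (here, e is
# synthesized from the feature signs purely for illustration).
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
e = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)  # 4 explanation ids
y = (e > 0).astype(int)

# Simplest TED instantiation: fold (y, e) into one combined class,
# train on it, then decode both label and explanation afterwards.
n_expl = 4
combined = y * n_expl + e
clf = DecisionTreeClassifier(random_state=0).fit(X, combined)

pred = clf.predict(X)
y_hat, e_hat = pred // n_expl, pred % n_expl
print("label accuracy:", (y_hat == y).mean())
print("explanation accuracy:", (e_hat == e).mean())
```

Because the explanation becomes part of the prediction target, any off-the-shelf classifier can return an explanation alongside each label without inspecting the model's internals.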
Sanghamitra Dutta (Carnegie Mellon University), ..., Kush R. Varshney (IBM) (6 authors)
Our goal is to understand the so-called trade-off between fairness and accuracy. In this work, using a tool from information theory called Chernoff information, we derive fundamental limits on this relationship that explain why the accuracy on a given dataset often decreases as fairness increases. Novel to this work, we examine the problem of fair classification through the lens of a mismatched hypothesis testing problem, i.e., where we are trying to find a classifier that distinguishes between ...
1 Citation
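The Chernoff information used in the abstract above has a direct computational form for discrete distributions: C(P, Q) = max over s in [0, 1] of -log Σ_x P(x)^s Q(x)^(1-s). A minimal sketch, assuming a simple grid search over s (the function name and example distributions are illustrative, not from the paper):

```python
import numpy as np

def chernoff_information(p, q, grid=1000):
    """Chernoff information between two discrete distributions.

    C(P, Q) = max_{0 <= s <= 1} -log sum_x P(x)^s Q(x)^(1-s),
    approximated here by a grid search over s.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    s = np.linspace(0.0, 1.0, grid)
    # Rows index candidate values of s; columns index outcomes x.
    mix = np.power(p[None, :], s[:, None]) * np.power(q[None, :], 1.0 - s[:, None])
    return float(np.max(-np.log(mix.sum(axis=1))))

# Two biased coins: the larger the gap between the distributions,
# the larger the Chernoff information (and the easier the test).
print(chernoff_information([0.9, 0.1], [0.5, 0.5]))
```

Chernoff information is the best achievable error exponent in binary hypothesis testing, which is why it is a natural tool for bounding how well any fair classifier can distinguish the underlying hypotheses.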
Fredrik D. Johansson (Chalmers University of Technology), Dennis Wei, ..., Kush R. Varshney (7 authors)
Overlap between treatment groups is required for nonparametric estimation of causal effects. If a subgroup of subjects always receives (or never receives) a given intervention, we cannot estimate the effect of intervention changes on that subgroup without further assumptions. When overlap does not hold globally, characterizing local regions of overlap can inform the relevance of any causal conclusions for new subjects, and can help guide additional data collection. To have impact, these descript...
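A common first pass at the overlap question above is to estimate propensity scores and flag subjects whose estimated probability of treatment is too close to 0 or 1. This is a generic sketch of that diagnostic, not the characterization method the paper proposes; the threshold `eps` and the simulated data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def overlap_mask(X, t, eps=0.1):
    """Flag subjects in regions of treatment-group overlap.

    Fits a propensity model P(T=1 | X) and keeps subjects whose
    estimated propensity lies in [eps, 1 - eps]; outside that band
    one treatment arm is (nearly) never observed, so nonparametric
    effect estimates for those subjects are not identified.
    """
    model = LogisticRegression().fit(X, t)
    p = model.predict_proba(X)[:, 1]
    return (p >= eps) & (p <= 1 - eps)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1))
# Treatment assignment depends strongly on X, so extreme X values
# fall outside the overlap region.
t = (rng.random(500) < 1 / (1 + np.exp(-4 * X[:, 0]))).astype(int)
mask = overlap_mask(X, t)
print(mask.mean())  # fraction of subjects in the overlap region
```

The paper's point is that a bare propensity cutoff like this does not *describe* the overlap region in interpretable terms; characterizing it with compact rules is what makes the diagnostic actionable.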
Jun 9, 2019 in ICML (International Conference on Machine Learning)
Sanjeeb Dash (IBM), Tian Gao (IBM), ..., Dennis Wei (IBM) (4 authors)
3 Citations
Dennis Wei (IBM), Sanjeeb Dash, ..., Oktay Günlük (4 authors)
This paper considers generalized linear models using rule-based features, also referred to as rule ensembles, for regression and probabilistic classification. Rules facilitate model interpretation while also capturing nonlinear dependences and interactions. Our problem formulation accordingly trades off rule set complexity and prediction accuracy. Column generation is used to optimize over an exponentially large space of rules without pre-generating a large subset of candidates or greedily boost...
1 Citation
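The rule-ensemble idea above can be sketched with a fixed candidate rule set: binarize features into threshold conditions, form conjunctions as rule indicator features, and fit a sparsity-penalized linear model so that rule-set complexity trades off against accuracy. Note the paper optimizes over rules by column generation rather than pre-generating candidates as done here; the thresholds, toy data, and L1 penalty are illustrative stand-ins.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: the label is 1 exactly when both conditions of a hidden
# rule hold, so a rule-based model can recover it.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(400, 2))
y = ((X[:, 0] > 0.5) & (X[:, 1] > 0.5)).astype(int)

# Pre-generate simple candidate rules: single-threshold conditions
# and their pairwise conjunctions across different features.
thresholds = [0.25, 0.5, 0.75]
atoms = [(j, thr) for j in range(X.shape[1]) for thr in thresholds]

def rule_features(X):
    cols = [(X[:, j] > thr).astype(float) for j, thr in atoms]
    for a in range(len(atoms)):
        for b in range(a + 1, len(atoms)):
            if atoms[a][0] != atoms[b][0]:
                cols.append(cols[a] * cols[b])  # conjunction rule
    return np.column_stack(cols)

# An L1 penalty selects few rules, standing in for the paper's
# explicit complexity/accuracy trade-off.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(rule_features(X), y)
print("accuracy:", clf.score(rule_features(X), y))
print("rules used:", int(np.count_nonzero(clf.coef_)))
```

Column generation replaces the enumeration in `rule_features` with a pricing subproblem that searches the exponential rule space for the rule whose addition most improves the objective, so no large candidate set is ever materialized.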
Barbara A. Han, Subhabrata Majumdar (University of Florida), ..., Kush R. Varshney (IBM) (11 authors)
The recent Zika virus (ZIKV) epidemic in the Americas ranks among the largest outbreaks in modern times. Like other mosquito-borne flaviviruses, ZIKV circulates in sylvatic cycles among primates that can serve as reservoirs of spillover infection to humans. Identifying sylvatic reservoirs is critical to mitigating spillover risk, but relevant surveillance and biological data remain limited for this and most other zoonoses. We confronted this data sparsity by combining a machine learning...
2 Citations
Jan 27, 2019 in AAAI (National Conference on Artificial Intelligence)
Michael Hind (IBM), Dennis Wei (IBM), ..., Kush R. Varshney (IBM) (8 authors)
Artificial intelligence systems are being increasingly deployed due to their potential to increase the efficiency, scale, consistency, fairness, and accuracy of decisions. However, as many of these systems are opaque in their operation, there is a growing demand for such systems to provide explanations for their decisions. Conventional approaches to this problem attempt to expose or discover the inner workings of a machine learning model with the hope that the resulting explanations will be mean...
3 Citations
Noel C. F. Codella, Michael Hind (IBM), ..., Aleksandra Mojsilovic (8 authors)
Using machine learning in high-stakes applications often requires predictions to be accompanied by explanations comprehensible to the domain user, who has ultimate responsibility for decisions and outcomes. Recently, a new framework for providing explanations, called TED, has been proposed to provide meaningful explanations for predictions. This framework augments training data to include explanations elicited from domain users, in addition to features and labels. This approach ensures that expl...
Vijay Arya, ..., Yunfeng Zhang (20 authors)
As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (this http URL), an open-source software toolkit featuring eig...
4 Citations