References (20)
Published on Jan 29, 2019 in arXiv: Applications
Jiahao Chen (Estimated H-index: 6) + 2 authors, Madeleine Udell (Estimated H-index: 9, Cornell University)
Assessing the fairness of a decision making system with respect to a protected class, such as gender or race, is challenging when class membership labels are unavailable. Probabilistic models for predicting the protected class based on observable proxies, such as surname and geolocation for race, are sometimes used to impute these missing labels for compliance assessments. Empirically, these methods are observed to exaggerate disparities, but the reason why is unknown. In this paper, we decompos...
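A minimal sketch of the imputation approach described above, assuming toy data: decisions are averaged using the imputed class probabilities as weights. The variable names and synthetic data are illustrative, not the paper's decomposition.

```python
# Hedged sketch: probability-weighted disparity estimation when protected-class
# labels are imputed from proxies (e.g. P(race | surname, geolocation)).
# All names and the toy data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
outcome = rng.binomial(1, 0.4, size=n)    # 1 = favorable decision
p_class = rng.uniform(0.1, 0.9, size=n)   # imputed P(member of protected class)

# Probability-weighted approval rates for the protected class and its complement.
rate_protected = np.sum(outcome * p_class) / np.sum(p_class)
rate_other = np.sum(outcome * (1 - p_class)) / np.sum(1 - p_class)
print("estimated disparity:", rate_protected - rate_other)
```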
Published on Jul 30, 2018
Abhinav Maurya (Estimated H-index: 2, CMU: Carnegie Mellon University)
Published on Jul 1, 2018 in arXiv: Learning
Maya R. Gupta (Estimated H-index: 21), Andrew Cotter (Estimated H-index: 12) + 1 author, Serena Wang (Estimated H-index: 2)
We consider the problem of improving fairness when one lacks access to a dataset labeled with protected groups, making it difficult to take advantage of strategies that can improve fairness but require protected group labels, either at training or runtime. To address this, we investigate improving fairness metrics for proxy groups, and test whether doing so results in improved fairness for the true sensitive groups. Results on benchmark and real-world datasets demonstrate that such a proxy fairn...
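A minimal sketch of the proxy-group evaluation idea, under assumed toy data: the same fairness metric (here, a demographic-parity gap) is computed on a noisy proxy attribute and on the unobserved true one. The 80% agreement rate is an arbitrary assumption, not the paper's benchmark setting.

```python
# Hedged sketch: compare a parity gap measured on a proxy group with the gap
# on the true sensitive group. Toy data and noise level are assumptions.
import numpy as np

def parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(predictions[groups == 0].mean() - predictions[groups == 1].mean())

rng = np.random.default_rng(0)
true_group = rng.binomial(1, 0.5, size=5000)
# A proxy that agrees with the true group 80% of the time (assumed).
proxy_group = np.where(rng.random(5000) < 0.8, true_group, 1 - true_group)
predictions = rng.binomial(1, np.where(true_group == 1, 0.3, 0.5))

print("gap on proxy groups:", parity_gap(predictions, proxy_group))
print("gap on true groups: ", parity_gap(predictions, true_group))
```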
Published on Jun 20, 2018 in The Compass
Skyler Speakman (IBM), Srihari Sridharan (Estimated H-index: 1, IBM), Isaac Markus (Estimated H-index: 1, IBM)
Mobile money platforms are gaining traction across developing markets as a convenient way of sending and receiving money over mobile phones. Recent joint collaborations between banks and mobile-network operators leverage a customer's past mobile phone transactions in order to create a credit score for the individual. In this work, we address the problem of launching a mobile-phone based credit scoring system in a new market without the marginal distribution of features of borrowers in the new ma...
Published on Apr 23, 2018 in WWW (The Web Conference)
Emmanouil Krasanakis (Estimated H-index: 1), Eleftherios Spyromitros-Xioufis (Estimated H-index: 7) + 1 author, Yiannis Kompatsiaris (Estimated H-index: 18)
Machine learning bias and fairness have recently emerged as key issues due to the pervasive deployment of data-driven decision making in a variety of sectors and services. It has often been argued that unfair classifications can be attributed to bias in training data, but previous attempts to ‘repair’ training data have led to limited success. To circumvent shortcomings prevalent in data repairing approaches, e.g. by weighting training samples of the sensitive group (e.g. gender, race, financial...
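The sample-weighting strategy the abstract mentions can be sketched as follows, assuming scikit-learn and toy data; the weight of 2.0 is an arbitrary illustration, not the paper's method.

```python
# Hedged sketch: upweight training samples from the sensitive group before
# fitting a classifier. Weight value and synthetic features are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
sensitive = rng.binomial(1, 0.3, size=1000)   # 1 = member of sensitive group

# Give sensitive-group samples extra weight so they influence the fit more.
weights = np.where(sensitive == 1, 2.0, 1.0)
clf = LogisticRegression().fit(X, y, sample_weight=weights)
```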
Published on 2017 in arXiv: Machine Learning
Pang Wei Koh (Estimated H-index: 11), Percy Liang (Estimated H-index: 39)
How can we explain the predictions of a black-box model? In this paper, we use influence functions -- a classic technique from robust statistics -- to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying training points most responsible for a given prediction. To scale up influence functions to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector pro...
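A minimal sketch of the "oracle access to gradients and Hessian-vector products" primitive, assuming PyTorch; the LiSSA-style inverse-HVP recursion and its hyperparameters (damping, scale, steps) are assumptions for illustration, not the paper's exact implementation.

```python
# Hedged sketch: influence functions need only gradients and Hessian-vector
# products (HVPs), both available via double backprop in torch.autograd.
import torch

def hvp(loss, params, vec):
    """Hessian-vector product via double backprop."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    return torch.autograd.grad(flat @ vec, params, retain_graph=True)

def inverse_hvp(loss, params, vec, steps=100, damping=0.01, scale=10.0):
    """Approximate H^{-1} v with the recursion v_{t+1} = v + (I - H/scale) v_t."""
    estimate = vec.clone()
    for _ in range(steps):
        hv = torch.cat([h.reshape(-1)
                        for h in hvp(loss, params, estimate.detach())])
        estimate = vec + (1 - damping) * estimate - hv / scale
    return estimate / scale

# influence(z, z_test) = -grad_test @ inverse_hvp(total_loss, params, grad_train)
```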
Published on 2017 in arXiv: Computers and Society
Sam Corbett-Davies (Estimated H-index: 6), Emma Pierson (Estimated H-index: 6) + 2 authors, Aziz Z. Huq (Estimated H-index: 20)
Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques recently have been proposed to achieve algorithmic fairness. Here we reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfyi...
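A minimal sketch of the constrained-optimization framing, assuming synthetic risk scores: search over group-specific thresholds for the best utility subject to a cap on the detention-rate gap. The toy utility function and 0.05 tolerance are illustrative assumptions.

```python
# Hedged sketch: maximize a utility proxy subject to a fairness constraint.
import numpy as np

rng = np.random.default_rng(0)
scores = {g: rng.beta(2 + g, 5 - g, size=2000) for g in (0, 1)}  # risk scores

def utility(threshold, s):
    # Toy utility: reward detaining high-risk, penalize detaining low-risk.
    detained = s >= threshold
    return np.sum(s[detained] - 0.5)

best, best_u = None, -np.inf
grid = np.linspace(0, 1, 101)
for t0 in grid:
    for t1 in grid:
        rate0 = np.mean(scores[0] >= t0)
        rate1 = np.mean(scores[1] >= t1)
        if abs(rate0 - rate1) <= 0.05:               # fairness constraint
            u = utility(t0, scores[0]) + utility(t1, scores[1])
            if u > best_u:
                best, best_u = (t0, t1), u
print("thresholds:", best)
```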
Published on 2017 in Journal of Machine Learning Research (impact factor 4.09)
Atilim Gunes Baydin (Estimated H-index: 8, University of Oxford), Barak A. Pearlmutter (Estimated H-index: 30, MU: Maynooth University) + 1 author, Jeffrey Mark Siskind (Estimated H-index: 26, Purdue University)
Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "auto-diff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and e...
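The core idea is easy to illustrate with forward-mode AD over dual numbers, where every operation propagates a value and its derivative via the chain rule; real AD systems, including those the survey covers, are far more general.

```python
# Hedged sketch: forward-mode AD with dual numbers.
import math

class Dual:
    def __init__(self, val, dot):
        self.val, self.dot = val, dot   # value and derivative

    def __mul__(self, other):           # product rule
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

    def __add__(self, other):           # sum rule
        return Dual(self.val + other.val, self.dot + other.dot)

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x * sin(x) + x] at x = 2, seeded with dot = 1.
x = Dual(2.0, 1.0)
y = x * sin(x) + x
print(y.val, y.dot)   # derivative equals sin(2) + 2*cos(2) + 1
```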
Published on Jun 1, 2017 in arXiv: Applications
Alexandra Chouldechova (Estimated H-index: 1)
Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time. Although such instruments are gaining increasing popularity across the country, their use is attracting tremendous controversy. Much of the controversy concerns potential discriminatory bias in the risk assessments that are produced. This article discusses several fairness criteria that have recently been applied to asse...
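One such criterion, error-rate balance, can be sketched directly, assuming toy arrays: compare false-positive rates across groups.

```python
# Hedged sketch: per-group false-positive rates, one of the fairness criteria
# discussed for recidivism prediction instruments. Toy arrays are assumptions.
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = y_true == 0
    return np.mean(y_pred[negatives]) if negatives.any() else 0.0

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

for g in (0, 1):
    m = group == g
    print(f"group {g} FPR:", false_positive_rate(y_true[m], y_pred[m]))
```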
Published on Jan 1, 2015
Tamara Cook (Estimated H-index: 1), Claudia McKay (Estimated H-index: 1, World Bank)
Cited By (2)
Published on May 28, 2019 in arXiv: Computers and Society
Kush R. Varshney (Estimated H-index: 15, IBM), Aleksandra Mojsilovic (Estimated H-index: 24)
The AI for social good movement has now reached a state in which a large number of one-off demonstrations have illustrated that partnerships of AI practitioners and social change organizations are possible and can address problems faced in sustainable development. In this paper, we discuss how moving from demonstrations to true impact on humanity will require a different course of action, namely open platforms containing foundational AI capabilities to support common needs of multiple organizati...
Published on Jun 10, 2019 in arXiv: Machine Learning
Pranjal Awasthi (Estimated H-index: 16, RU: Rutgers University), Matthäus Kleindessner (Estimated H-index: 3, RU: Rutgers University), Jamie Morgenstern (Estimated H-index: 14, Georgia Institute of Technology)
Most approaches for ensuring or improving a model's fairness with respect to a protected attribute (such as race or gender) assume access to the true value of the protected attribute for every data point. In many scenarios, however, perfect knowledge of the protected attribute is unrealistic. In this paper, we ask to what extent fairness interventions can be effective even with imperfect information about the protected attribute. In particular, we study this question in the context of the promin...
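A small simulation, assuming a symmetric flip-noise model, illustrates the problem the abstract raises: noise in the protected attribute attenuates the measured disparity. The flip rates are arbitrary; the paper's actual analysis and conditions differ.

```python
# Hedged sketch: how attribute noise distorts a measured parity gap.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
group = rng.binomial(1, 0.5, size=n)
pred = rng.binomial(1, np.where(group == 1, 0.4, 0.6))        # true gap: 0.2

for flip in (0.0, 0.1, 0.3):
    noisy = np.where(rng.random(n) < flip, 1 - group, group)  # perturbed attribute
    gap = abs(pred[noisy == 1].mean() - pred[noisy == 0].mean())
    print(f"flip={flip:.1f}  measured gap={gap:.3f}")
```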
Published on Jun 3, 2019 in arXiv: Learning
Dennis Wei (Estimated H-index: 11), Karthikeyan Natesan Ramamurthy (Estimated H-index: 12), Flávio du Pin Calmon (Estimated H-index: 12)
This paper considers fair probabilistic classification where the outputs of primary interest are predicted probabilities, commonly referred to as scores. We formulate the problem of transforming scores to satisfy fairness constraints that are linear in conditional means of scores while minimizing the loss in utility. The same formulation can be applied both to post-process classifier outputs as well as to pre-process training data. We derive a closed-form expression for the optimal transformed s...
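One member of the constraint family the abstract describes (constraints linear in conditional means of scores) is mean-score parity; the sketch below enforces it with a per-group shift. This is an assumed illustration, not the paper's closed-form optimal transform.

```python
# Hedged sketch: post-process scores so group mean scores match the overall
# mean. The shift-and-clip scheme and toy data are assumptions.
import numpy as np

def equalize_mean_scores(scores, groups):
    target = scores.mean()
    out = scores.copy()
    for g in np.unique(groups):
        mask = groups == g
        out[mask] += target - scores[mask].mean()   # per-group mean shift
    return np.clip(out, 0.0, 1.0)                   # keep valid probabilities

rng = np.random.default_rng(0)
groups = rng.binomial(1, 0.5, size=1000)
scores = np.clip(rng.normal(0.45 + 0.1 * groups, 0.1), 0, 1)
adjusted = equalize_mean_scores(scores, groups)
```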
Published on Jan 1, 2019 in arXiv: Learning
Dylan Slack (Estimated H-index: 1, UCI: University of California, Irvine), Sorelle A. Friedler (Estimated H-index: 12, Haverford College), Emile Givental (Haverford College)
In this paper, we advocate for the study of fairness techniques in low-data situations. We propose two algorithms, Fairness Warnings and Fair-MAML. The first is a model-agnostic algorithm that provides interpretable boundary conditions for when a fairly trained model may not behave fairly on similar but slightly different tasks within a given domain. The second is a fair meta-learning approach to train models that can be trained through gradient descent with the objective of "learning how to lear...
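A minimal sketch of a Fair-MAML-style meta-step, assuming PyTorch: each task's inner loss adds a fairness penalty, and the outer update differentiates through one inner gradient step. The model, penalty, and hyperparameters are assumptions, not the paper's specification.

```python
# Hedged sketch: MAML-style meta-update with a fairness-penalized task loss.
import torch

theta = torch.zeros(5, requires_grad=True)   # shared initialization
alpha, beta = 0.1, 0.01                      # inner / outer step sizes

def task_loss(w, X, y, group, lam=1.0):
    logits = X @ w
    base = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
    # Fairness penalty: squared gap in mean predicted score between groups.
    p = torch.sigmoid(logits)
    gap = p[group == 1].mean() - p[group == 0].mean()
    return base + lam * gap ** 2

def meta_step(tasks):
    """tasks: iterable of (X, y, group) tensors; y is float, group is 0/1."""
    meta_loss = 0.0
    for X, y, group in tasks:
        inner = task_loss(theta, X, y, group)
        (g,) = torch.autograd.grad(inner, theta, create_graph=True)
        adapted = theta - alpha * g          # one inner gradient step
        meta_loss = meta_loss + task_loss(adapted, X, y, group)
    theta.data -= beta * torch.autograd.grad(meta_loss, theta)[0]
```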