Rachel K. E. Bellamy
IBM
Human–computer interaction · Information foraging · Software · Computer science · Multimedia
121 Publications · 23 H-index · 1,859 Citations
Publications (93)
Feb 7, 2020 in AAAI (National Conference on Artificial Intelligence)
Authors: Yunfeng Zhang (IBM, H-index 5), Rachel K. E. Bellamy (IBM, H-index 23), Kush R. Varshney (IBM, H-index 18)
Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. The AI research community has proposed many methods to measure and mitigate unwanted biases, but few of them involve inputs from human policy makers. We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, and because achieving fairn...
Authors: Yunfeng Zhang (H-index 5), Q. Vera Liao (H-index 9), Rachel K. E. Bellamy (H-index 23)
Today, AI is being increasingly used to help human experts make decisions in high-stakes scenarios. In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on their domain knowledge complementary to the model's to ensure task success. We refer to these scenarios as AI-assisted decision making, where the individual strengths of the human and the AI come together to optimize the joint decision outcome. A key...
3 Citations
Authors: Bhavya Ghai (H-index 1), Q. Vera Liao (H-index 9), …, Klaus Mueller (H-index 40) — 5 authors
Active Learning (AL) is a human-in-the-loop Machine Learning paradigm favored for its ability to learn with fewer labeled instances, but the model's states and progress remain opaque to the annotators. Meanwhile, many recognize the benefits of model transparency for people interacting with ML models, as reflected by the surge of explainable AI (XAI) as a research field. However, explaining an evolving model introduces many open questions regarding its impact on the annotation quality and the ann...
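The abstract above describes active learning, in which the model repeatedly asks a human annotator to label the instances it is least certain about. A minimal sketch of that query-selection step, using uncertainty sampling with a toy one-feature logistic model (all names and numbers here are illustrative, not from the paper):

```python
import math

def predict_proba(x, weight):
    # Toy logistic model over a single feature (an assumption for illustration,
    # not the paper's model).
    return 1.0 / (1.0 + math.exp(-weight * x))

def most_uncertain(unlabeled, weight):
    # Uncertainty sampling: pick the unlabeled instance whose predicted
    # probability is closest to 0.5, i.e. where the model is least sure.
    return min(unlabeled, key=lambda x: abs(predict_proba(x, weight) - 0.5))

unlabeled = [-3.0, -0.2, 0.1, 2.5]
query = most_uncertain(unlabeled, weight=1.0)
# 0.1 yields a probability nearest 0.5, so it would be sent to the annotator first.
```

In a full loop, the annotator's label would be added to the training set, the model retrained, and the selection repeated; the XAI question the abstract raises is what of this evolving model should be shown to the annotator at each step.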
Jul 17, 2019 in AAAI (National Conference on Artificial Intelligence)
Authors: Neil Mallinar (H-index 1), Abhishek Shah (H-index 1), …, Blake McGregor (H-index 1) — 12 authors
Many conversational agents in the market today follow a standard bot development framework which requires training intent classifiers to recognize user input. The need to create a proper set of training examples is often the bottleneck in the development process. On many occasions, agent developers have access to historical chat logs that can provide good quantity as well as coverage of training examples. However, the cost of labeling them with tens to hundreds of intents often prohibits taking...
1 Citation
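To make the "intent classifier" in the abstract above concrete, here is a deliberately tiny sketch: score each intent by word overlap between the user utterance and that intent's training examples. The intents and examples are hypothetical, and real bot frameworks use learned classifiers rather than this keyword overlap; it only illustrates the mapping from utterance to intent label.

```python
# Hypothetical intents with a few training examples each.
TRAINING = {
    "check_balance": ["what is my balance", "show my account balance"],
    "transfer_money": ["send money to a friend", "transfer funds"],
}

def classify(utterance):
    # Return the intent whose examples share the most words with the utterance.
    words = set(utterance.lower().split())
    def score(intent):
        return max(len(words & set(ex.split())) for ex in TRAINING[intent])
    return max(TRAINING, key=score)

print(classify("please transfer some money"))  # → transfer_money
```

The paper's bottleneck is visible even in this sketch: every intent needs labeled examples, and labeling historical chat logs against tens to hundreds of intents is what becomes expensive.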
Authors: Rachel K. E. Bellamy (IBM, H-index 23), Kuntal Dey (IBM, H-index 8), …, Yunfeng Zhang (IBM, H-index 5) — 17 authors
Today, machine-learning software is used to help make decisions that affect people's lives. Some people believe that the application of such software results in fairer decisions because, unlike humans, machine-learning software generates models that are not biased. Think again. Machine-learning software is also biased, sometimes in similar ways to humans, often in different ways. While fair model-assisted decision making involves more than the application of unbiased models-consideration of app...
1 Citation
Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This article introduces a new open-source Python toolkit for algorithmic fairness, AI Fairness 360 (AIF360), released under an Apache v2.0 license (https://github.com/ibm/aif360). The main objectives of this toolkit are to help facilitate the transition of fairness research algorithms for use in an indu...
5 Citations
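AIF360 packages group-fairness metrics alongside bias-mitigation algorithms. One of the simplest such metrics is statistical parity difference: the favorable-outcome rate for the unprivileged group minus that for the privileged group. The sketch below computes it by hand on made-up data to show the kind of quantity the toolkit measures; it deliberately does not use AIF360's own API, and the group labels and outcomes are illustrative.

```python
def statistical_parity_difference(outcomes, groups, privileged):
    # P(favorable | unprivileged) - P(favorable | privileged);
    # 0 means parity, negative values mean the unprivileged group is
    # favored less often. Assumes exactly two groups and binary outcomes.
    def favorable_rate(g):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(members) / len(members)
    unprivileged = next(g for g in set(groups) if g != privileged)
    return favorable_rate(unprivileged) - favorable_rate(privileged)

outcomes = [1, 0, 1, 1, 0, 0]              # 1 = favorable decision
groups   = ["a", "a", "a", "b", "b", "b"]  # hypothetical protected attribute
print(statistical_parity_difference(outcomes, groups, privileged="a"))
```

Here group "a" receives the favorable outcome 2/3 of the time and group "b" only 1/3, so the metric is -1/3, flagging a disparity a mitigation algorithm might then try to reduce.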
Jun 18, 2019 in DIS (Designing Interactive Systems)
Authors: Erick Oduor (IBM, H-index 7), Carolyn Pang (McGill University), … — 9 authors
Information communication technologies for development (ICTD) can support people with chronic illnesses living in rural communities. In Kenya, ICTD use in areas where undetected cases of hypertension and high HIV infection rates exist is underexplored. Partnering with a health facility in Migori, Kenya, we report on the uses of technology in managing HIV. We see the use of technology to manage HIV was influenced by the roles and routines of patients and clinicians, trust between practitioners an...
Mar 17, 2019 in IUI (Intelligent User Interfaces)
Authors: Jonathan Dodge (OSU: Oregon State University, H-index 5), Q. Vera Liao (IBM, H-index 9), …, Casey Dugan (IBM, H-index 13) — 5 authors
Ensuring fairness of machine learning systems is a human-in-the-loop process. It relies on developers, users, and the general public to identify fairness problems and make improvements. To facilitate the process we need effective, unbiased, and user-friendly explanations that people can confidently rely on. Towards that end, we conducted an empirical study with four types of programmatically generated explanations to understand how they impact people's fairness judgments of ML systems. With an e...
Authors: Tathagata Chakraborti (IBM, H-index 10), Kshitij P. Fadnis (IBM, H-index 2), …, Rachel K. E. Bellamy (IBM, H-index 23) — 7 authors