Ethics in AI Seminar - Mind the Gap: From Predictions to ML-Informed Decisions

The Institute for Foundations of Machine Learning (IFML) and Good Systems are hosting Prof. Maria De-Arteaga as part of their collaborative Ethics in AI Seminar Series. Join us at 1 pm on Monday, April 12, 2021!
The seminar series examines use-inspired cases that address the ethical foundations of AI/ML. Our current focus is on the topic of fairness, whether it relates to bias, algorithmic justice, or another dimension of equity. Presentations home in on the intersection of technical and ethical challenges and articulate how seemingly technical strategies might have broader societal impacts. Each presentation is followed by a dialogue among IFML and Good Systems affiliates and guests.
Mind the Gap: From Predictions to ML-Informed Decisions

Maria De-Arteaga, Assistant Professor,
Department of Information, Risk and Operations Management, University of Texas at Austin
Monday, April 12, 2021
1 - 2 p.m. CT

Register for the Zoom link at https://events.attend.com/f/1383793500

Abstract: Machine learning (ML) is increasingly being used to support decision-making in critical settings, where predictions have potentially grave implications for human lives. In this talk, Prof. De-Arteaga will discuss the gap that exists between ML predictions and ML-informed decisions. The first part of the talk will highlight the role of humans-in-the-loop through a study of the adoption of an algorithmic tool used to assist child maltreatment hotline screening decisions. We will focus on the question: Are humans capable of identifying cases in which the machine is wrong, and of overriding those recommendations? The second part of the talk will focus on the gap between the observed outcome that the algorithm optimizes for and the construct of interest to experts. Prof. De-Arteaga proposes a methodology based on influence functions to reduce this gap by extracting knowledge from experts' historical decisions. In the context of child maltreatment hotline screenings, we find that (1) there are high-risk cases whose risk is considered by the experts but not wholly captured in the target labels used to train a deployed model, and (2) the proposed approach improves recall for these cases.

Maria De-Arteaga is an assistant professor in the Department of Information, Risk and Operations Management at the University of Texas at Austin. She received a joint doctorate in machine learning and public policy from Carnegie Mellon University. Her research focuses on the risks and opportunities of using machine learning for decision support in high-stakes settings. Her work received the Best Thematic Paper Award at NAACL'19 and the Innovation Award on Data Science at Data for Policy'16, and has been featured by UN Women and Global Pulse in their report "Gender Equality and Big Data: Making Gender Data Visible." She is a recipient of a 2020 Google Award for Inclusion Research and a 2018 Microsoft Research Dissertation Grant, and was named a 2019 EECS Rising Star. In 2017, she co-founded the Machine Learning for the Developing World (ML4D) workshop series at NeurIPS.