We know people and AI can be more successful together than apart, but we also know that AI can be used in ways that are harmful. How can we develop and evaluate intelligent systems and avoid dangerous unintended consequences? What makes a system “good”?
While machine learning has the potential to remake the world around us, it also carries ethical and social risks stemming from concerns such as privacy, fairness, and explainability. The Machine Learning Laboratory believes we have an obligation to develop the next generation of algorithms and machine learning applications in lockstep with research that ensures these methods are developed responsibly.
For that reason, we are partnering with the ‘Good Systems’ project and its team of policy researchers, philosophers, and ethicists to ensure that the technologies we develop are not only innovative and cutting edge, but also ethical, socially conscious, and beneficial to society, without inadvertently introducing risk or causing harm. For more on Good Systems, visit bridgingbarriers.utexas.edu/good-systems/