Responsible AI

About This Research Theme

Over the past few decades, we have made significant strides toward building an equitable society for diverse groups of people, and we cannot let the integration of AI into society undo that progress. Instead, we want AI to embody and promote essential societal values such as fairness and diversity. Many past examples have revealed the challenges of building ethical AI systems. In particular, discriminatory prediction algorithms that inherit societal biases and stereotypes have become a hotly debated issue. As AI finds its way into areas we might not have foreseen, subtler ethical concerns will continue to emerge beyond such overt forms of discrimination, making the development of ethical AI not only a pressing task today but also a challenge that will persist throughout our future with AI. Our lab's goal is to identify the ever-evolving ethical challenges presented by AI-powered systems and to develop mathematical frameworks and practical tools that can effectively quantify and mitigate these ethical issues.
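As a concrete illustration of what "quantifying" an ethical issue can mean, the sketch below computes one widely used fairness metric, the demographic parity difference: the gap in positive-prediction rates between demographic groups. This is a minimal, hypothetical example (the function name and toy data are our own), not an implementation of any specific method from the lab's publications.

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    rate = {}
    for g in set(groups):
        # Positive-prediction rate within group g
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(members) / len(members)
    values = list(rate.values())
    return max(values) - min(values)

# Toy data: group A receives positive predictions at rate 3/4,
# group B at rate 1/4, so the demographic parity difference is 0.5.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, grps))  # 0.5
```

A value of 0 indicates that all groups receive positive predictions at the same rate; larger values signal a disparity worth investigating.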

👩🏻‍🔬 Projects in this Theme

📚 Selected Publications

  • Alghamdi, Wael, Hsiang Hsu, Haewon Jeong, Hao Wang, Peter Michalak, Shahab Asoodeh, and Flavio Calmon. "Beyond Adult and COMPAS: Fair multi-class prediction via information projection." Advances in Neural Information Processing Systems 35 (2022).
  • Jeong, Haewon, Hao Wang, and Flavio P. Calmon. "Fairness without imputation: A decision tree approach for fair prediction with missing values." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 9, pp. 9558-9566 (2022).
  • Jeong, Haewon, Michael D. Wu, Nilanjana Dasgupta, Muriel Médard, and Flavio Calmon. "Who Gets the Benefit of the Doubt? Racial Bias in Machine Learning Algorithms Applied to Secondary School Math Education." In International Conference on Artificial Intelligence in Education (AIED) (2022).