As machine learning is increasingly used to inform decision-making in consequential real-world settings (e.g., pre-trial bail, loan approval, or prescribing life-altering medication), it becomes important to explain how the system arrived at its decision, and also to suggest actions the affected individual can take to achieve a favorable decision.
My thesis objective is to study, design, and deploy methods that address the second of these tasks, specifically the generation of counterfactual explanations and minimal interventions. My focus therefore lies at the intersection of machine learning interpretability, causal and probabilistic modelling, and social philosophy and psychology.
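To make the idea concrete, a common formulation of a counterfactual explanation is: given a classifier f and an input x that receives an unfavorable decision, find a nearby input x' that receives a favorable one, trading off the decision flip against the distance to x. The sketch below is a minimal, hypothetical illustration (not the thesis method): a gradient search over a toy logistic "loan approval" model, where the weights, features, and the proximity penalty `lam` are all illustrative assumptions.

```python
import numpy as np

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.5, steps=500):
    """Gradient search for a nearby input x' whose predicted score
    approaches `target`, while a proximity penalty keeps x' close to x.
    Minimizes (f(x') - target)^2 + lam * ||x' - x||^2 for a logistic f."""
    xp = x.copy()
    for _ in range(steps):
        z = w @ xp + b
        p = 1.0 / (1.0 + np.exp(-z))  # classifier score f(x')
        # gradient of the objective w.r.t. x' (chain rule through the sigmoid)
        grad = 2.0 * (p - target) * p * (1.0 - p) * w + 2.0 * lam * (xp - x)
        xp = xp - lr * grad
    return xp

# Toy model with two features (e.g., income, debt) -- purely illustrative.
w = np.array([1.5, -2.0])
b = -0.5
x = np.array([0.2, 0.8])            # applicant currently denied (score < 0.5)
x_cf = counterfactual(x, w, b)      # nearby point with a favorable score
```

The difference `x_cf - x` can then be read as a suggested change (e.g., "raise income, lower debt"). A minimal *intervention*, by contrast, would act on a causal model of the features rather than perturbing them independently, which is what distinguishes the two notions above.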
In: Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), Proceedings of Machine Learning Research, vol. 108, pages 895-905, edited by Silvia Chiappa and Roberto Calandra, PMLR, August 2020 (conference)
Our goal is to understand the principles of Perception, Action, and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.