Keegan Hines

Counterfactual Explanations

At ArthurAI, we're interested in many topics surrounding Responsible AI, and especially in techniques that can make ML systems more understandable and interpretable. An emerging subfield of Explainable AI is that of Counterfactual Explanations. The goal of these methods is to help a user understand why a decision was made by an ML system by offering a simple and actionable description of what factors would change the decision in the future. This field has grown rapidly over just the past few years, and to help organize and keep up with the research, we put together this survey paper describing recent trends and open challenges. We presented this work at the NeurIPS RSA Workshop, where we were honored to receive a Best Paper Award. This is a fascinating field and an important part of making ML systems more trustworthy.
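
To make the idea concrete, here is a minimal sketch of one common formulation: searching for a nearby input that flips a model's prediction by minimizing a prediction loss plus a distance penalty. This is an illustrative toy, not code from the survey; the model choice (logistic regression), the `find_counterfactual` helper, and all hyperparameters are assumptions for the example.

```python
# Illustrative sketch of a distance-penalized counterfactual search.
# Assumes a logistic-regression model; names and settings are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def find_counterfactual(model, x, target=1.0, lam=0.1, lr=0.05, steps=500):
    """Search for x' near x such that the model's probability approaches `target`.

    Minimizes (p(x') - target)^2 + lam * ||x' - x||^2 by gradient descent.
    """
    w, b = model.coef_[0], model.intercept_[0]
    x_cf = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_cf + b)))  # predicted probability
        # gradient of squared prediction loss plus the distance penalty
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0]  # some instance the model currently rejects
x_cf = find_counterfactual(model, x)
print("original prediction:", model.predict_proba([x])[0, 1])
print("counterfactual prediction:", model.predict_proba([x_cf])[0, 1])
print("feature changes needed:", x_cf - x)
```

The vector of feature changes is the explanation itself: "if these features had been this much different, the decision would have gone the other way." Much of the research the survey covers is about making such changes sparse, plausible, and actionable rather than just mathematically nearby.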