Using Explainable Artificial Intelligence to open black-box models
Carla is a Brazilian software engineer and a master's student in Artificial Intelligence at USP. She promotes diversity in technology as an organizer of perifaCode, a Brazilian community that teaches computer science skills to Black and underrepresented groups in Brazil. She believes technology is steadily changing the social-good landscape and has been researching unconscious bias in Artificial Intelligence and how to design more transparent and trustworthy algorithms.
As machine learning becomes a crucial component of a growing number of user-facing applications, interpretable machine learning has become an increasingly important area of research, for several reasons. First, since humans are the ones who train, deploy, and often act on the predictions of machine learning models in the real world, it is of the utmost importance that we be able to trust those models. Beyond indicators such as accuracy on sample instances, a user's trust is directly shaped by how well they can understand and predict a model's behavior, as opposed to treating it as a black box. The good news is that we have made great strides in some areas of explainable AI. The bad news is that creating explainable AI is not as easy or simple as many Medium articles suggest. In this talk, I argue that we should separate explanations from the model (i.e., be model-agnostic), because true built-in model interpretability comes at a cost in performance and accuracy.
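To make the model-agnostic idea concrete, here is a minimal sketch (not from the talk itself) of one such technique, permutation importance: the explainer only calls the model's prediction function and never inspects its internals, so it works for any black box. The toy model, data, and function names below are illustrative assumptions, not part of the original abstract.

```python
# Hypothetical sketch of a model-agnostic explanation: permutation importance.
# We treat the model purely as a black box -- we only call predict(row),
# never look inside it.
import random

def train_black_box():
    # Stand-in black box: predicts 1 when feature 0 exceeds a threshold.
    # In practice this could be any trained model exposing a predict function.
    return lambda row: 1 if row[0] > 0.5 else 0

def accuracy(predict, X, y):
    return sum(predict(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(predict, X, y, feature, seed=0):
    """Drop in accuracy when one feature's column is shuffled.

    A large drop means the black box relies on that feature;
    no drop means the feature is irrelevant to its predictions.
    """
    rng = random.Random(seed)
    baseline = accuracy(predict, X, y)
    column = [row[feature] for row in X]
    rng.shuffle(column)
    X_shuffled = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, column)]
    return baseline - accuracy(predict, X_shuffled, y)

# Toy data: the label depends only on feature 0; feature 1 is noise.
X = [[0.1, 0.9], [0.9, 0.2], [0.7, 0.7], [0.2, 0.1]]
y = [0, 1, 1, 0]
model = train_black_box()

# Shuffling the informative feature hurts accuracy; the noise feature does not.
print(permutation_importance(model, X, y, feature=0))  # positive
print(permutation_importance(model, X, y, feature=1))  # 0.0
```

Because the explanation never depends on the model's internals, the same function applies unchanged whether the black box is a linear model, a gradient-boosted ensemble, or a neural network.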