Ensuring Fairness in AI: Mitigating Bias with Python Libraries and Frameworks


  • Emerging Technologies
  • Short Talk
  • Intermediate

    By Kweyakie Afi Blebo

    Data Scientist

    Abstract:

    As someone passionate about the ethical implications of AI, I’ve delved into how bias in machine learning can lead to unfair outcomes, especially for marginalized communities. In this talk, I’ll explore how Python can be used to tackle these biases and help ensure fairness in AI models.

    I’ll introduce you to key Python libraries and frameworks like Fairlearn, AI Fairness 360, and Themis-ML, which are powerful tools for detecting and mitigating bias. I’ll walk you through examples that demonstrate how these tools work and how they can be integrated into your projects.
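To make the idea of "detecting bias" concrete, the core check these libraries automate can be sketched in plain Python. The snippet below computes the demographic parity difference, the gap in positive-prediction rates across groups, which Fairlearn and AI Fairness 360 both report among their fairness metrics. The helper functions and data here are illustrative, not any library's actual API:

```python
# Minimal sketch of a core fairness check these libraries automate:
# demographic parity difference = the gap in positive-prediction
# (selection) rates between demographic groups. 0.0 means parity.
# The function names and data below are synthetic, for illustration only.

def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate across the given groups."""
    by_group = {}
    for group, pred in zip(groups, y_pred):
        by_group.setdefault(group, []).append(pred)
    rates = {g: selection_rate(preds) for g, preds in by_group.items()}
    return max(rates.values()) - min(rates.values())

# Synthetic model predictions for two demographic groups, "A" and "B"
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A is selected at 0.75, group B at 0.25: a 0.5 disparity
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

In practice you would not hand-roll this: Fairlearn's `MetricFrame` computes such per-group metrics directly from your model's predictions and a sensitive-feature column, and its mitigation algorithms then reduce the gap during or after training.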

    We’ll also discuss the challenges I’ve faced along the way, and I’ll be honest about the limitations of current tools. But most importantly, I’ll share why it’s crucial that we, as developers and data scientists, take responsibility for the fairness of the AI systems we create. By the end of this talk, I hope you’ll be inspired to integrate these practices into your work and contribute to building AI that’s not just smart, but also fair.
