Tools

Fairlearn: Improve fairness of AI systems

Fairlearn is an open-source, community-driven Python toolkit, originally developed at Microsoft Research, that helps data scientists improve the fairness of AI systems. It provides metrics and algorithms that let developers assess and mitigate the various kinds of unfair bias that can arise in predictive models.
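As an illustration of the kind of assessment Fairlearn supports, the sketch below computes the demographic parity difference by hand: the gap between the highest and lowest selection rates (fraction of positive predictions) across sensitive-feature groups. This is a plain-Python illustration of the metric, not Fairlearn's own implementation; Fairlearn exposes it as `fairlearn.metrics.demographic_parity_difference`.

```python
# Hand-rolled sketch of the demographic parity difference metric.
# A value of 0 means all groups are selected at the same rate.

def demographic_parity_difference(y_pred, sensitive_features):
    rates = []
    for g in set(sensitive_features):
        preds = [p for p, s in zip(y_pred, sensitive_features) if s == g]
        rates.append(sum(preds) / len(preds))  # selection rate for group g
    return max(rates) - min(rates)

# Toy data: group "a" is selected 3/4 of the time, group "b" only 1/4.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, groups))  # 0.5
```

In practice you would pass real model predictions and a real sensitive feature (e.g. a demographic column) rather than toy lists, and use Fairlearn's mitigation algorithms if the gap is unacceptably large.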

The What-If Tool (WIT)

The What-If Tool (WIT) is an open-source, interactive visualization tool developed by Google’s PAIR (People + AI Research) team. Its primary purpose is to help practitioners and developers interactively probe the behavior of machine learning models and inspect their predictions.

AI Fairness 360 (AIF360)

The AI Fairness 360 toolkit (AIF360) is an open-source software toolkit, developed by IBM Research, that can help detect and reduce bias in machine learning models. It enables developers to use state-of-the-art algorithms to routinely check for unwanted biases before they enter the machine learning pipeline, and to mitigate any biases that are discovered.
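One of the metrics AIF360 reports (via `BinaryLabelDatasetMetric.disparate_impact`) is the disparate impact ratio: the favorable-outcome rate of the unprivileged group divided by that of the privileged group, with values below roughly 0.8 (the "80% rule") commonly flagged as problematic. The sketch below computes it by hand on toy data; it is an illustration of the metric, not AIF360's own code, and the attribute values are hypothetical.

```python
# Hand-rolled sketch of the disparate impact ratio.
# A ratio of 1.0 means both groups receive favorable outcomes at the
# same rate; values below ~0.8 are commonly flagged for review.

def disparate_impact(y_pred, protected, unprivileged, privileged):
    def rate(group):
        preds = [p for p, a in zip(y_pred, protected) if a == group]
        return sum(preds) / len(preds)
    return rate(unprivileged) / rate(privileged)

# Hypothetical attribute values: "m" receives favorable outcomes 1/4
# of the time, "f" 3/4 of the time.
y_pred = [1, 1, 0, 1, 1, 0, 0, 0]
protected = ["f", "f", "f", "f", "m", "m", "m", "m"]
print(disparate_impact(y_pred, protected,
                       unprivileged="m", privileged="f"))  # ≈ 0.33, below 0.8
```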

Aequitas: The Bias and Fairness Audit Toolkit for ML

Aequitas is an open-source bias-audit toolkit developed at the University of Chicago's Center for Data Science and Public Policy. It helps machine learning developers, analysts, and policymakers audit models for discrimination and bias, and make informed, equitable decisions when developing and deploying predictive risk-assessment tools.
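The kind of group-level audit Aequitas performs can be sketched as follows: compute an error metric per group and compare groups against a reference. This plain-Python example (not Aequitas itself) computes false positive rates per group, the metric most relevant to risk-assessment tools, where a false positive wrongly labels someone high-risk.

```python
# Hand-rolled sketch of a per-group false positive rate audit.
# FPR for a group = false positives / actual negatives in that group.

def false_positive_rates(y_true, y_pred, groups):
    rates = {}
    for g in set(groups):
        fp = sum(1 for t, p, s in zip(y_true, y_pred, groups)
                 if s == g and t == 0 and p == 1)
        negatives = sum(1 for t, s in zip(y_true, groups)
                        if s == g and t == 0)
        rates[g] = fp / negatives
    return rates

# Toy data: group "b" is wrongly flagged twice as often as group "a".
y_true = [0, 0, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
rates = false_positive_rates(y_true, y_pred, groups)
print(rates)  # group "a": 1/3, group "b": 2/3
```

An audit would then report the disparity ratio between each group and a chosen reference group (here 2.0 for "b" relative to "a") and flag ratios outside an acceptable band.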