New features quantify and mitigate potential harmful bias while accelerating the model development lifecycle.
Be the institution both customers and regulators trust.
As you work to employ AI across your digital customer engagement, the subtle ways harmful bias can creep into models are a concern for everyone, not just those in charge of risk and ethics. xAI Workbench version 1.9 introduces built-in bias detection features that deliver fairness through awareness.
Start exploring today
If you are interested in seeing how this new capability can enhance your business outcomes, reach out to your InRule Technology contact to learn more.
Not all bias detection is built the same
Unlike platforms that only measure whether the distribution of data has changed over time, xAI Workbench bias detection evaluates the fairness of the model itself, ensuring that people who are similar (on the attributes most relevant to the modeled decision) receive equal treatment.
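The idea that similar individuals should receive similar model outputs can be illustrated with a minimal sketch. This is not InRule's actual implementation; the function name, the tolerance parameter, and the toy data are all hypothetical, chosen only to show the shape of an individual-fairness check:

```python
# Minimal individual-fairness sketch (hypothetical, not InRule's implementation):
# for every pair of records, people who are close on the decision-relevant
# features should receive similarly close model scores.
import numpy as np

def individual_fairness_violations(features, scores, lipschitz=1.0):
    """Count pairs whose score gap exceeds the allowed feature distance.

    features:  (n, d) array of decision-relevant attributes (standardized)
    scores:    (n,) array of model outputs in [0, 1]
    lipschitz: hypothetical tolerance -- how much score difference a unit
               of feature distance may justify
    """
    n = len(scores)
    violations = 0
    for i in range(n):
        for j in range(i + 1, n):
            feature_dist = np.linalg.norm(features[i] - features[j])
            score_gap = abs(scores[i] - scores[j])
            if score_gap > lipschitz * feature_dist:
                violations += 1
    return violations

# Two near-identical applicants with very different scores count as a violation.
features = np.array([[0.0, 0.0], [0.05, 0.0], [3.0, 3.0]])
scores = np.array([0.9, 0.2, 0.1])
print(individual_fairness_violations(features, scores))  # -> 1
```

The first two records differ only slightly in their features yet receive scores of 0.9 and 0.2, so that pair is flagged; the third record is far from both in feature space, so its lower score is permissible.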
Quantify and mitigate hazards
Bias detection in xAI Workbench delivers “fairness through awareness” and minimizes risk for organizations that leverage machine learning predictions at scale within business operations. Augmenting xAI Workbench with bias detection allows enterprises to quantify and mitigate potential hazards while complying with federal, state and local regulations or corporate policies.
Fairness through awareness
“Fairness through awareness” means an organization is empowered to evaluate its machine learning models for bias using all elements and data that are relevant to a prediction, even if those characteristics are not used to train the model itself. By contrast, “fairness through blindness” refers to selectively excluding elements and data from the modeling process. The risk with this blind approach is that it does not account for potential correlations between the remaining data and the excluded variable. It also offers no transparent comparison to determine whether the remaining proxy characteristics led to harmful bias even with the obvious element removed.
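A short sketch makes the proxy problem concrete. Assuming hypothetical column names (`group`, `zip_code`) and a toy scoring rule, the sensitive attribute is held out of training but kept for evaluation, so disparity carried into the model through a correlated proxy is still measurable:

```python
# "Fairness through awareness" sketch (hypothetical data and names):
# the sensitive attribute is excluded from training but retained for
# evaluation, so bias routed through proxy features is still detected.
import numpy as np

def demographic_parity_gap(scores, sensitive):
    """Absolute difference in mean model score between the two groups."""
    scores = np.asarray(scores, dtype=float)
    sensitive = np.asarray(sensitive)
    return abs(scores[sensitive == 1].mean() - scores[sensitive == 0].mean())

# The "blind" model never sees `group`, but `zip_code` is a proxy for it.
group    = np.array([0, 0, 0, 1, 1, 1])   # held out of training
zip_code = np.array([0, 0, 1, 1, 1, 1])   # correlated proxy used in training
scores   = 0.2 + 0.6 * zip_code            # toy model output driven by the proxy

print(round(demographic_parity_gap(scores, group), 2))  # -> 0.4
```

Even though the model was “blind” to `group`, the evaluation against the held-out attribute reveals a 0.4 gap in average score between the groups, exactly the kind of proxy-driven disparity that fairness through blindness cannot surface.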