Using probabilistic classification to measure fairness for regression

18 February 2020

[Figure: plots showing the main results of the paper]

Gradient Institute has released a paper (to be presented at the 2020 Ethics of Data Science Conference) studying how to create quantitative, mathematical representations of fairness that can be incorporated into AI systems to promote fair AI-driven decisions.

For discrete decisions (such as accepting or rejecting a loan application), there are well-established ways to quantify fairness. However, many decisions lie in a continuous range, such as setting an interest rate. Few existing methods quantify fairness for such continuous decisions, especially methods that require no assumptions about the underlying process. We propose new methods to quantify fairness for continuous decisions, allowing fairness considerations to be incorporated into the design of AI systems used to set interest rates, risk scores, payment amounts or other decisions that lie in a continuous range.
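To illustrate the general idea behind the title, here is a minimal sketch (not the paper's exact method) of using a probabilistic classifier as a fairness probe: train a classifier to predict the protected attribute from the continuous decisions, and use its AUC as a measure of how much group information the decisions leak. An AUC near 0.5 suggests the decisions are nearly independent of the protected attribute. The function name `fairness_auc` and the simulated data are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fairness_auc(decisions, groups):
    """Illustrative fairness probe (not the paper's exact estimator):
    fit a probabilistic classifier predicting the protected attribute
    from the continuous decisions, and report its AUC.
    AUC ~ 0.5 means the decisions carry little group information."""
    X = np.asarray(decisions).reshape(-1, 1)
    clf = LogisticRegression().fit(X, groups)
    scores = clf.predict_proba(X)[:, 1]
    return roc_auc_score(groups, scores)

# Simulated example: interest rates for two groups.
rng = np.random.default_rng(0)
groups = rng.integers(0, 2, size=2000)
fair_rates = rng.normal(5.0, 1.0, size=2000)      # independent of group
unfair_rates = fair_rates + 1.5 * groups          # one group pays more

print(fairness_auc(fair_rates, groups))    # close to 0.5
print(fairness_auc(unfair_rates, groups))  # well above 0.5
```

The same probe applies to any continuous decision (risk scores, payment amounts): if no classifier can distinguish groups from the decisions, the decisions satisfy an independence-style fairness criterion.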

You can find the draft paper on arXiv.