Imagine that you are deciding whether to release a person on bail, grant a consumer a loan, or hire a job candidate. Now imagine that your method for making these decisions uses data to algorithmically predict how people will behave—who will skip bail, default on the loan, or be a good employee. How will you know whether the way you determine outcomes is fair?
In recent years, computer scientists and others have worked hard to answer this question. The flourishing literature on “algorithmic fairness” offers dozens of candidate criteria, such as testing whether your algorithm predicts equally accurately across groups (calibration), comparing rates of favorable outcomes by race and sex (demographic parity), and assessing how often predictions are incorrect for each group (error-rate balance).
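To make these checks concrete, here is a minimal sketch in plain Python of how two of them might be computed, assuming binary predictions (1 = favorable outcome), binary true labels, and a group label for each person. The function names and toy data are illustrative, not a reference to any particular library or the authors' own method.

```python
# Sketch of common group-fairness checks: selection rate per group
# (demographic parity) and true/false-positive rates per group
# (error-rate balance). All names and data are illustrative.

def rate(values):
    """Fraction of 1s in a list; returns 0.0 for an empty list."""
    return sum(values) / len(values) if values else 0.0

def group_metrics(y_true, y_pred, groups):
    """Per-group selection rate, true-positive rate, and false-positive rate."""
    metrics = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        t = [y_true[i] for i in idx]
        p = [y_pred[i] for i in idx]
        metrics[g] = {
            # Demographic parity compares selection rates across groups.
            "selection_rate": rate(p),
            # Error-rate balance compares TPR and FPR across groups.
            "tpr": rate([pi for ti, pi in zip(t, p) if ti == 1]),
            "fpr": rate([pi for ti, pi in zip(t, p) if ti == 0]),
        }
    return metrics

# Hypothetical toy data: did the person repay (y_true)?
# Did the model approve the loan (y_pred)?
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

for g, m in group_metrics(y_true, y_pred, groups).items():
    print(g, m)
```

On this toy data the two groups have identical selection rates but different true- and false-positive rates, which illustrates why the literature offers many distinct criteria: an algorithm can satisfy one fairness definition while violating another.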