Giving Compass' Take:
- Edmund L. Andrews explains how “noisy” data can disadvantage low-income and BIPOC borrowers, affecting their ability to buy a home.
- How does using artificial intelligence to calculate credit risk for disadvantaged communities perpetuate deep-rooted inequities?
- Learn about the benefits of adopting alternative data in credit scoring.
“Noisy” data can trip up artificial intelligence tools that calculate credit risk, leading to disadvantages for low-income and minority borrowers, research finds.
For aspiring home buyers, getting a mortgage often comes down to one talismanic number: the credit score.
Banks and other lenders are turning to artificial intelligence to develop increasingly sophisticated models for scoring credit risk. But even though credit-scoring companies are legally prohibited from considering factors like race or ethnicity, critics have long worried that the models contain hidden biases against disadvantaged communities, limiting their access to credit.
Now a preprint study in which researchers used artificial intelligence to test alternative credit-scoring models finds that there is indeed a problem for lower-income families and minority borrowers: the predictive tools are 5 to 10 percent less accurate for these groups than for higher-income and non-minority groups.
It’s not that the credit score algorithms themselves are biased against disadvantaged borrowers. Rather, it’s that the underlying data is less accurate in predicting creditworthiness for those groups, often because those borrowers have limited credit histories.
A “thin” credit history will itself lower a person’s score, because lenders prefer more data to less. It also means that one or two small dings, such as a delinquent payment from many years ago, can do outsized damage to the score.
“We’re working with data that’s flawed for all sorts of historical reasons,” says Laura Blattner, an assistant professor of finance at the Stanford University Graduate School of Business, who is coauthor of the new study with Scott Nelson of the University of Chicago Booth School of Business.
Read the full article about ‘noisy’ data and credit inequity by Edmund L. Andrews at Futurity.