Giving Compass' Take:

• Jenny R. Yang, writing for the Urban Institute, explains how AI hiring technology reproduces racial bias from past hiring decisions and offers suggestions for correcting the problem.

• AI hiring technology is not inherently racist. How does its behavior reflect the biases of its designers and users? What can we do to push for equity in AI hiring technology and counteract generations of discriminatory practice?

• Learn more about discrimination in AI hiring technology and how we can correct it.


As businesses begin to rehire after months of extraordinary job loss, artificial intelligence (AI)-driven hiring screens are becoming increasingly attractive to employers as an alternative to in-person interactions during the COVID-19 pandemic. To winnow down a flood of online job applications efficiently, major employers are using predictive hiring tools that screen and rank resumes, assess candidates through online games, and conduct video interviews that analyze applicants’ facial expressions.

Widely used practices such as subjective resume review and unstructured interviews enable stereotypical views and inaccurate assumptions to influence hiring decisions. And despite decades of research showing that resumes with African American, Latinx, or Asian-sounding names are 24 to 36 percent less likely to receive interview requests than otherwise identical resumes with white-sounding names, such practices remain the foundation of many employers' hiring processes.

Hiring assessment technology could help expand the applicant pool by measuring abilities directly rather than relying on proxies for talent, such as a college degree, employee referrals, or recruitment from competitors, all of which may exclude qualified workers who have been historically underrepresented. By moving away from traditional criteria, employers could hire from a more diverse pool of high-performing candidates. Yet simply disrupting the current system with technology will not advance equity. Hiring assessment systems reflect the choices of their developers, who may not detect bias in the data, a particularly acute concern given the lack of diversity in the AI field.

To realize the promise of new technology, we must ensure systems are carefully designed to prevent bias and to document and explain the decisions needed to evaluate their reliability and validity. Without adequate safeguards, algorithmic assessments can perpetuate the patterns of systemic discrimination already present in the workforce.

Read the full article about AI hiring technology by Jenny R. Yang at Urban Institute.