Giving Compass' Take:
- Jenny R. Yang discusses three ways that bias can enter AI-powered systems and lead to discrimination, as well as three ways to prevent it.
- AI-powered tools are transforming the lives of America’s workers, with profound implications for civil rights. How can donors help ensure AI is moving in the right direction?
- Here's an article on potential risks from advanced artificial intelligence.
In 2012, college engineering student Kyle Behm applied for a number of hourly jobs at retail stores. Behm had worked in similar positions before, but these jobs required personality assessments. He had been diagnosed with bipolar disorder, so questions about whether he experienced mood changes led many of the retailers to reject him even though he was well qualified.
Behm’s story illustrates the risks posed by a new generation of tools powered by artificial intelligence (AI) that are transforming the lives of America’s workers, with profound implications for civil rights.
Last week, I testified before the House Committee on Education and Labor Subcommittee on Civil Rights and Human Services to discuss how technology is changing work and how policymakers can address the new civil rights challenges raised by algorithmic hiring tools, worker surveillance, and tech-enabled business models that disrupt traditional employer-employee relationships.
Many new tech-driven hiring systems use AI to filter ever-larger pools of online applicants more quickly. Employers are using chatbots, résumé-screening tools, online assessments, web games, and video interviews to automate various stages of the hiring process.
Some employers aim to hire more quickly, assess “cultural fit,” or reduce turnover. Others aim to make better job-related decisions and hire more diverse candidates, expanding the applicant pool by measuring abilities rather than relying on traditional proxies for talent, such as graduation from an elite university, employee referrals, or recruiting from competitors. AI may be able to help employers identify workers who have been excluded from traditional pathways to success but have the skills necessary to succeed.
But with AI, machines work to replicate human decision-making, and often the bias in an AI system is the human behavior it emulates. When employers simply automate and replicate their past hiring decisions, rather than hiring based on a rigorous analysis of job-related criteria, they can perpetuate historic bias. Discriminatory criteria can be baked into algorithmic models and then rapidly scaled.
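To make that mechanism concrete, here is a minimal sketch in Python. The data, the "elite university" feature, and the numbers are entirely hypothetical illustrations, not anything from the article: a screening model trained to replicate past hiring decisions ends up scoring equally skilled applicants differently by group, even though the protected attribute is never given to the model, because a correlated proxy carries the old bias forward.

```python
# A minimal, hypothetical sketch (synthetic data, not from the article):
# a resume-screening model is trained to replicate past hiring decisions.
# The protected attribute is never used as a feature, but a proxy feature
# ("elite university") correlated with it carries the bias into the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Protected group membership; excluded from the model's features.
group = rng.integers(0, 2, n)

# Job-relevant skill, identically distributed across both groups.
skill = rng.normal(0, 1, n)

# Historic proxy: group-1 applicants were far more likely to have attended
# the "elite university" that past hiring favored.
elite_school = rng.random(n) < np.where(group == 1, 0.6, 0.1)

# Past hiring decisions rewarded the proxy, not just skill. This is the
# historic bias the model is being asked to replicate.
past_hired = (skill + 2.0 * elite_school + rng.normal(0, 0.5, n)) > 1.0

# Train only on apparently neutral features.
X = np.column_stack([skill, elite_school])
model = LogisticRegression().fit(X, past_hired)

# Skill is identical across groups, yet the groups receive different
# scores, because the proxy feature imports the old bias at scale.
scores = model.predict_proba(X)[:, 1]
print("mean score, group 1:", round(scores[group == 1].mean(), 3))
print("mean score, group 0:", round(scores[group == 0].mean(), 3))
```

Removing the protected attribute from the inputs does nothing here; the bias lives in the training labels and in any feature correlated with group membership, which is why auditing outcomes, not just inputs, matters.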
Read the full article about three ways artificial intelligence can discriminate and how to move forward by Jenny R. Yang at Urban Institute.