Giving Compass' Take:
- Michal Kosinski highlights the potential of AI facial recognition to predict a person’s self-identified political ideology, which has concerning implications for privacy.
- What role can you play in protecting the public from the dangers of this type of technology?
- Learn about the problem of facial recognition in schools.
With facial recognition becoming more widespread, Michal Kosinski has concerns about the dangers of the technology and the controversies that come with it.
Kosinski’s research makes people uncomfortable. “As it should,” he says. “The privacy risks we uncover in our research should make anyone uncomfortable.”
In his most recent study, published earlier this year in Scientific Reports, Kosinski fed more than 1 million social media profile photos into a widely used facial recognition algorithm and found that it could correctly predict a person’s self-identified political ideology 72% of the time. In contrast, humans got it right 55% of the time.
Kosinski, an associate professor of organizational behavior at the Stanford University Graduate School of Business, does not see this as a breakthrough but rather a wake-up call. He hopes that his findings will alert people (and policymakers) to the misuse of this rapidly emerging technology.
Kosinski’s latest work builds on his 2018 paper in which he found that one of the most popular facial recognition algorithms, likely without its developers’ knowledge, could sort people based on their stated sexual orientation with startling accuracy.
“We were surprised—and scared—by the results,” he recalls. When they reran the experiment with different faces, “the results held up.”
That study sparked a firestorm. Kosinski’s critics said he was engaging in “AI phrenology” and enabling digital discrimination. He responded that his detractors were shooting the messenger: he was publicizing the invasive and nefarious uses of a technology that is already widespread but whose threats to privacy remain poorly understood.
He admits that his approach presents a paradox: “Many people have not yet realized that this technology has a dangerous potential. By running studies of this kind and trying to quantify the dangerous potential of those technologies, I am, of course, informing the general public, journalists, politicians, and dictators that, ‘Hey, this off-the-shelf technology has these dangerous properties.’ And I fully recognize this challenge.”
Kosinski stresses that he does not develop any artificial intelligence tools; he’s a psychologist who wants to better understand existing technologies and their potential to be used for good or ill.
“Our lives are increasingly touched by the algorithms,” he says. Companies and governments are collecting our personal data wherever they can find it—and that includes the personal photos we publish online.
Read the full interview with Michal Kosinski about facial recognition at Futurity.