Giving Compass' Take:

• Will Byrne explains why AI is vulnerable to bias and how tech companies and organizations can work to combat AI bias. 

• Are you and/or your organization ready to push back against AI bias through one of these avenues? What partnerships could help you maximize your impact?

• Learn about charting a path for AI in education.


The conventional wisdom, often peddled by Silicon Valley, is that when it comes to bias in decision-making, artificial intelligence is the great equalizer. On its face, it makes sense: if we delegate complex decisions to AI, it all comes down to the math, cold calculations uncolored by the biases or prejudices we hold as people.

As we’ve entered the infancy of the AI age, the fallacy in this thinking has revealed itself in some spectacular ways. Google’s first generation of visual AI identified images of people of African descent as gorillas. Voice command software in cars struggled to understand female voices. During the 2016 presidential election, Facebook’s algorithms spread fear-stoking lies to its most vulnerable users, allowing a foreign power to meaningfully swing the election of the most powerful office in the world.

As with any new technology, artificial intelligence reflects the bias of its creators. Societal bias is a stubborn problem that has stymied humans since the dawn of civilization. Our introduction of synthetic intelligence may be making it worse.

So what do we do?
  1. A first step is creating transparency standards, open-sourcing code, and making AI less inscrutable.
  2. A new field called explainable AI has taken root, focused on creating AI systems that can explain the reasoning behind their decisions to human users.
  3. Most important will be achieving diversity of backgrounds in teams designing and architecting AI systems, across race, gender, culture, and socioeconomic background.

Read the full article on AI bias by Will Byrne at FastCompany.