Many of us may remember Spring 2020 as a continuous loop of sifting through the firehose of new information about the pandemic on our Twitter feeds, Facebook feeds, or WhatsApp group chats. By Summer 2020, many were in the streets rising up against an epidemic of a different sort, anti-Black police violence, and again processing collectively online. Other moments over the past year and a half are similarly marked by technology. To facilitate children’s distance learning, families downloaded new, unfamiliar software: some to connect kids with classmates, some to help similarly home-bound educators monitor the work of dozens of students. In a time of confusion and uncertainty, algorithms were one of many tools used to determine the allocation of critical resources like pandemic relief funding, healthcare supplies, and vaccines.

What is less visible is the system of power and decision-making that operates in the background: the online quota systems surveilling the Uber and Amazon drivers shuffling us and our products around; the insular tech company decisions on how to categorize and moderate false health and political information on social media; the surveillance society being constructed by police and national security departments that invades personal privacy and criminalizes whole communities; the world of government procurement of automated systems to ostensibly improve delivery of health or education services.

This report provides an overview of the current public conversation about the ongoing COVID-19 pandemic and algorithm-based artificial intelligence in three interrelated domains that impact public health and social equity: automated decision systems, surveillance, and social media.

Read the full document about technology and social justice at the Othering & Belonging Institute.