Giving Compass' Take:
- Greta Byrum and Ruha Benjamin discuss the need to invest in technology justice that focuses on the communities most harmed by technology, namely communities of color.
- How can donors help center those most impacted by surveillance and address the biases built into our technologies?
- Read about technology, COVID-19, and social justice.
Consider the video camera outside your window. Does it give you a sense of safety or of being watched? The wearable tracker on your wrist—will it help you instill better health habits or sell your private information to insurers and ad-tech companies? Will your child’s online schooling help them connect with teachers and friends or expose you and your household to surveillance?
Virtually every new technology tied into the massive, interconnected web of data and machine power undergirding the global internet has the potential for both social benefit and social harm. And communities that have been overpoliced and surveilled are more likely than others to bear those harms. As Simone Browne demonstrated in her book Dark Matters, contemporary tech-enabled surveillance practices are an extension of the long history of policing Black life in the United States from slavery onward.
Yet even as vital critiques like Browne’s emerge, the trajectory of technological investment and development continues to produce powerful engines of prediction, decision-making, and tracking that governments and companies apply seamlessly in social contexts. These tools become solutions in search of problems, deployed on complex social issues such as policing, criminal justice, health care, education, immigration, and social services.
Unfortunately, it’s becoming ever clearer that artificial intelligence and automated systems can create, reinforce, and deepen social injustice. The predictions machines make are not objective: humans imagine, build, train, and deploy computational systems using flawed datasets that embed the historical bias of our social and political institutions. And the large computational models and infrastructure needed to sustain these systems form an interconnected foundation of powerful, centralized surveillance with the potential for exploitation and abuse.
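To make that mechanism concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the data is synthetic, and the scenario (a lending- or hiring-style approval decision with a zip-code proxy) is an illustration, not the authors’ example. It shows how a model trained on biased historical decisions reproduces the bias even when the protected attribute is excluded from its inputs, because a correlated proxy feature carries the bias forward:

```python
import random

random.seed(0)

def make_historical_record():
    """Generate one synthetic past-decision record (all values hypothetical).

    group:     a protected attribute (0 or 1)
    zip_code:  a proxy feature correlated with group (residential segregation)
    qualified: the true underlying qualification, same base rate in both groups
    approved:  the historical human decision, biased against group 1
    """
    group = random.randint(0, 1)
    zip_code = group if random.random() < 0.9 else 1 - group
    qualified = random.random() < 0.5
    if qualified:
        # Biased history: qualified group-1 applicants were approved
        # far less often than equally qualified group-0 applicants.
        approved = random.random() < (0.9 if group == 0 else 0.4)
    else:
        approved = random.random() < 0.1
    return zip_code, qualified, approved, group

train = [make_historical_record() for _ in range(100_000)]

# "Train" the simplest possible model: the empirical approval rate for each
# (zip_code, qualified) combination. The protected attribute is deliberately
# left out of the model's features.
counts = {}
for zip_code, qualified, approved, _group in train:
    hits, total = counts.get((zip_code, qualified), (0, 0))
    counts[(zip_code, qualified)] = (hits + approved, total + 1)

def predict_approval_prob(zip_code, qualified):
    hits, total = counts[(zip_code, qualified)]
    return hits / total

# Among *equally qualified* applicants, the learned model still recommends
# approval at very different rates by group, because zip code acts as a proxy.
for group in (0, 1):
    typical_zip = group  # the most common zip code for this group
    print(f"group {group}: P(approve | qualified) = "
          f"{predict_approval_prob(typical_zip, True):.2f}")
```

Run as written, the model approves equally qualified applicants at roughly 0.85 versus 0.45 depending on group, even though it never sees the protected attribute. This is the dynamic described above: the bias lives in the historical data, and the proxy feature carries it into the model’s predictions.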
Read the full article about tech justice by Greta Byrum and Ruha Benjamin at Stanford Social Innovation Review.