There is solid basis for concern about whether the labor sector of the future will be a dignified one, given the indignities so many workers suffer today. The impact of technology on labor has a long history in economic analysis, and the rise of automated decision making (ADM) has breathed fresh life into this concern, so much so that a formal area of inquiry has emerged: the Future of Work. “[A] dignified future for labor demands responsive technologies and a collective share in the benefits of automation.”[1] This is the basis for charting a continued struggle for dignified labor in the age of automated decision making.

It is possible that ADM technologies will have an unprecedented impact on labor, setting them apart from the technological innovations of the past. Kate Crawford points out that the growth of ADM is not inevitable; it is a policy choice.[2] Mariana Mazzucato demonstrates how the economy, including a tech-based economy, can be governed differently so that greater social value is created.[3] A large part of what gives ADM technology its unusual impact on labor is not the technology itself, but the economic and physical landscape on which it unfolds. This suggests that the core concerns around ADM are not technical ones. The problem of the future of work doesn’t revolve around debugging a line of code or troubleshooting software.[4] Rather, the future of work is entangled with what Nancy Folbre describes as “social software” and the “algorithms we use to allocate labor.”[5]

The global scale of current indignities of labor is demonstrated in the international division of labor, in uneven development managed through colonial relationships and international trade agreements, and in unprecedented levels of inequality in wealth, income, and political power.[6] These multiple battlegrounds of power and labor are rooted in historic labor relationships.

Most notable among these are the chattel slavery system in the US and the violent exploitation of labor associated with the colonial period and its legacy. Latoya Peterson writes that racism is an API: “…[O]ppression operates in the same formats, runs the same scripts over and over. It is tweaked to be context specific, but it’s all the same source code. And the key to its undoing is recognizing how many of us are ensnared in these same basic patterns and modifying our own actions.”[7]

Ruha Benjamin suggests that race itself is another kind of technology, and she frames emerging technologies as tools that hide, speed up, or reinforce racism well beyond the realm of technology itself. Simone Browne links ADM surveillance technologies directly to the techniques used to control enslaved people in the US. The many reports of racialized surveillance and encoded bias in ADM are rooted in these deeply racialized practices. Benjamin discusses the field of race critical code studies and the “Black box of coded inequity” as the site of many racialized applications of ADM design.[8] With this analysis in mind, it is not at all coincidental that Artificial Intelligence (AI) was brought into the world by a 2-month study conducted by 10 white men in the summer of 1956 at Dartmouth College.

Read the full article on the future of work by Wendy Ake at the Othering & Belonging Institute.