In Los Angeles, an algorithm helps decide who — out of 58,000 homeless people — gets access to a small amount of available housing. In Indiana, the state used a computer system that flagged any mistake on an application for food stamps, healthcare, or cash benefits as a “failure to cooperate”; 1 million people lost benefits. In Pittsburgh, a child protection agency is using an algorithm to try to predict future child abuse, despite the algorithm’s problems with accuracy.

In a new book, Automating Inequality, Virginia Eubanks calls these examples of the digital poorhouse: tech-filled systems that grow out of a long history of cultural assumptions about what it means to be poor. In the 1800s, when actual, prison-like poorhouses were common, some politicians embraced the idea that people should receive assistance only if they were willing to live in the poorhouse. The conditions were so bad, they reasoned, that the “undeserving” poor — those seen as not working hard enough — would be discouraged from supposedly taking advantage of the system. By the late 1800s, the “scientific charity” movement had begun collecting data and opening investigative cases to decide who was deserving and who was not.

New technology used in public services, Eubanks argues, comes out of the same old thinking. “It’s really important to understand that these tools are more evolution than revolution, even though we talk about them often as disruptors,” she says.

Read the full article about how algorithms create a "digital poorhouse" by Adele Peters at