What is Giving Compass?
We connect donors to learning resources and ways to support community-led solutions. Learn more about us.
Giving Compass' Take:
• Albert Fox Cahn reports on his participation in New York City's process to understand how automated decision systems are impacting citizens; however, he writes that the well-intentioned AI initiative went "horribly wrong."
• How can we establish more regulations around artificial intelligence as it becomes more prevalent and normalized in society? What other challenges should we anticipate?
• Here's why we should regulate AI to avert a cyber arms race.
When I stepped into the City Hall boardroom, it was filled with the nervous energy of the first day of school or a new job. But the occasion was something far wonkier: the inaugural meeting of the New York City Automated Decision Systems (ADS) Task Force. Excitingly, this was the first task force in the country to comprehensively analyze the impact of artificial intelligence on government. Looking at everything from predictive policing, to school assignments, to trash pickup, the people in this room were going to decide what role AI should play and what safeguards we should have.
But that’s not what happened.
Flash forward 18 months, and the end of the process couldn't have been more dissimilar from its start. The nervous energy had been replaced with exhaustion. Our optimism that we'd be able to provide an outline for the ways the New York City government should be using automated decision systems gave way to a fatalistic belief that we may not be able to tackle a problem this big after all.
Read the full article about artificial intelligence regulations by Albert Fox Cahn at Fast Company.