Artificial intelligence is being developed largely within the private, for-profit sector and with little regulation. As a result, AI governance (the values, norms, policies, and safeguards that comprise industry standards) has been left in the hands of a relative few whose decisions have the potential to affect the lives of many, underscoring why nonprofits must have a role in shaping it.

And if this leadership lacks representation from the communities affected by automated decision-making, particularly marginalized communities, then the technology could be making the issue of inequity worse, not better.

So say the legal experts, executives, and nonprofit leaders who spoke with NPQ about the future of “AI governance” and the critical role nonprofits and advocacy groups can and must play to ensure AI reflects equity, not exclusion.

A Lack of Oversight

The potential for AI to influence or even change society, in ways anticipated and not, is increasingly clear to scholars. Yet these technologies are being developed much as conventional software platforms are, rather than as powerful, potentially dangerous technologies that require serious, considered governance and oversight.

Several experts who spoke to NPQ didn’t mince words about the absence of such governance and oversight in AI, or about the importance of shaping AI governance intentionally and ethically.

“There is no AI governance standard or law at the US federal government level,” said Jeff Le, managing principal at 100 Mile Strategies and a fellow at George Mason University’s National Security Institute. He is also a former deputy cabinet secretary for the State of California, where he led the cyber, AI, and emerging tech portfolios, among others.

While Le cited a few state laws, including the Colorado Artificial Intelligence Act and the Texas Data Privacy and Security Act, he noted that there are currently few consumer protections or privacy safeguards in place to prevent the misuse of personal data by large language models (LLMs).

Le also pointed to recent survey findings showing public support for more governance in AI, stating, “Constituents are deeply concerned about AI, including privacy, data, workforce, and society cohesion concerns.”

Read the full article about nonprofits shaping AI governance by Jennifer Johnson at Nonprofit Quarterly.