Giving Compass' Take:

• Element AI provides insights from a range of experts on policy solutions for AI governance grounded in human rights. 

• How can funders work to implement these policy suggestions? 

• Read about the need for ethical and legal guidelines in the digital age.


As different approaches to governing artificial intelligence (AI) have struggled to build public trust, a number of scholars, international organizations, and civil society advocates have put forward an alternative model: the international human rights law framework. Universal in scope and benefiting from global legitimacy and state adherence, its proponents argue that a human rights approach to governing AI, with its emphasis on law, rights, accountability, and remedy, offers a clear value proposition.

In October 2019, Element AI partnered with the Mozilla Foundation and The Rockefeller Foundation to convene a workshop on the human rights approach to AI governance to determine what concrete actions could be taken in the short term to help ensure that respect for human rights is embedded into the design, development, and deployment of AI systems. Global experts from the fields of human rights, law, ethics, public policy, and technology participated. This report provides a summary of the workshop discussions and includes a list of recommendations that came out of the meeting.

The report recommends that governments adopt a phased approach to making human rights due diligence and human rights impact assessments a regulatory requirement in the public and private sectors, beginning with the development of model frameworks and sector-specific codes of conduct. It also recommends that industrial policy adopt a human rights approach, for instance through the creation of tailored direct spending programs to help ensure that the design and technological foundations of rights-respecting AI, such as transparency, explainability, and accountability, are firmly established in key sectors.

The report also examines the potentially transformative role that a group of investors could play in shaping a new ecosystem of technology companies. Finally, the report recommends that governments implement a dedicated capacity-building effort to accelerate understanding of how the existing legislative and regulatory framework can be applied to ensure respect for human rights, and to identify potential gaps where adjustments may be necessary. This could be accomplished through the creation of an independent Centre of Expertise on AI, which could assume a range of new functions as a source of policy expertise, capacity building, and oversight across government departments, regulatory agencies, industry, civil society, and international organizations.