Fairness is a concept that sits at the heart of the work of many Civil Society Organizations (CSOs), so the fact that it is also one of the key concerns about the implementation of AI should immediately suggest that civil society has something relevant to bring to the table. But using a word like “fair” raises an awful lot of questions, such as “fair to whom?” and “fair in what way?”

Luckily, academics have begun to dig into some of these issues and to parse the concept of fairness as it applies to Machine Learning (ML) systems. What emerges is not a single uniform notion of fairness, but a series of context-relevant sub-questions that can often be assessed in more practical ways.
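To give a flavour of what “assessed in more practical ways” can mean: one of the sub-questions researchers have formalized is demographic parity, i.e. whether a model gives positive outcomes at roughly the same rate across different groups. The short Python sketch below is purely illustrative and is not taken from the article; the data, the group labels and the suggested tolerance are assumptions chosen for the example.

    # Minimal sketch of demographic parity, one formalized fairness
    # sub-question. Data and the 0.1 tolerance are illustrative
    # assumptions, not a standard.

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rate between groups."""
        counts = {}
        for pred, group in zip(predictions, groups):
            n, positives = counts.get(group, (0, 0))
            counts[group] = (n + 1, positives + pred)
        rates = {g: pos / n for g, (n, pos) in counts.items()}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical model outputs (1 = approved) for applicants
    # from two groups, A and B.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap, rates = demographic_parity_gap(preds, groups)
    print(rates)           # {'A': 0.6, 'B': 0.4}
    print(f"gap = {gap}")  # a gap above some tolerance (say 0.1)
                           # might warrant further review

Demographic parity is only one such notion, and it can conflict with others (such as equal error rates across groups), which is exactly why the choice of fairness criterion depends on context and is a question civil society can help answer.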

For example, before we have even started building an ML system, we need to ask some fundamental questions, such as:

  • Is it fair to apply ML in this context at all?
  • Do the risks clearly outweigh any potential gains?
  • Have the people and communities that this system will affect been given an opportunity to voice any concerns?
  • Are there demographic or cultural considerations that should give us cause for concern?
  • Does the system inherently require data that could compromise the privacy or rights of certain individuals or groups?

This is a point at which civil society clearly has a role to play: CSOs can bring relevant insights on human rights and civil liberties issues, as well as knowledge of marginalized groups and communities.

Read the full article about the role of civil society in AI by Rhodri Davies at Charities Aid Foundation.