At Wellcome Data Labs we are developing a new way of applying social science approaches to how AI algorithms are produced to solve data science problems. The goal is to avoid potential negative consequences of those algorithms by identifying them earlier in the development process.

We have worked out a paired approach to Agile ethics which is intended to resolve this issue. Our proposed methodology has three steps:

  1. Embedding in Data Labs a user researcher with a background both in working as part of Agile product teams and in carrying out social science research. This embedded researcher will have the explicitly defined objective of testing the algorithmic models that the software developers and data scientists are working on from the point of view of their possible social impact.
  2. They will adjust and develop their analysis iteratively to match the speed of the technology work, and feed their emerging conclusions back to the data scientists to steer the course of that work.
  3. The embedded researcher will be paired up with another social scientist outside the team to provide an objective critique and the necessary checks and balances on their analysis.

All three parts of the proposed methodology are equally important.

  • Not embedding the researcher in the team would make it hard for them to build a close enough understanding of what the data scientists are doing.
  • Not iteratively retesting and revising their analysis of possible social impact would mean failing to match the rhythm of the technological development – the key proposed advantage of this methodology.
  • Finally, the pairing is designed to prevent the embedded researcher from losing their professional detachment and objectivity – a risk precisely because they are so closely embedded within the technology teams.

This whole approach is an experiment in itself, and we are not at all certain that it will work. However, that is exactly what makes it exciting to us. We hope it will help us become better aware of the biases introduced by the algorithms we develop and minimize any unintended negative consequences of the tools the team produces.

Read the full article about ethical data science at Wellcome.