Giving Compass' Take:
- Thomas Byrnes explores the ethical challenges of using AI in humanitarian contexts, discussing the tension between ensuring informed consent and providing life-saving aid.
- How can donors support the development of ethical AI solutions in the humanitarian sector, ensuring that vulnerable groups are afforded autonomy and able to provide informed consent?
Personally identifiable data in humanitarian contexts is like nuclear fuel – it has immense power to do good, enabling us to reach millions with life-saving aid, but it’s inherently dangerous. Like nuclear material, once it leaks, the damage can’t be undone. And when using AI in humanitarian contexts, we must operate under the assumption that eventually, it will leak.
Let’s use Yemen as an example. The crisis there presents a perfect storm:
- Dwindling humanitarian funding,
- Complex identification challenges across conflict lines,
- Millions in desperate need of immediate assistance.
Artificial intelligence could offer powerful solutions for ensuring fair and efficient aid distribution, yet in a context where consent cannot be truly informed, we must confront an impossible ethical choice:
Is consent ethically meaningful when the alternative is starvation, or do we compromise on our principles of informed consent to save lives?
Reality in Yemen: Too Many IDs
AI solutions are not currently being deployed in this way in Yemen, but the country’s situation provides a compelling case study for the ethical dilemmas of using AI in humanitarian contexts.
In Yemen, humanitarian agencies must navigate a byzantine landscape of 26 different forms of functional IDs – issued by pre-war authorities, current local administrations, warring factions, and various administrative bodies. Each claims legitimacy, and many beneficiaries hold multiple, sometimes conflicting IDs.
Traditional matching methods collapse under the weight of different Arabic name variations, inconsistent household definitions, and programs operating across conflict lines. Add to this the substantial population lacking any formal ID, and you begin to understand why AI’s promise of efficient identification is so tempting – and so ethically complex.
Deduplicating Identification Overload When Using AI in Humanitarian Contexts
Humanitarian actors in Yemen face an impossible daily task: ensuring that the same person isn’t registered multiple times across various programs, especially when they might present a different form of ID each time, a challenge on which traditional matching methods often fall short.
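The name-matching failure described above can be sketched in a few lines of code. The following is a minimal illustration, not any agency's deployed system: the names, ID types, normalization rules, and similarity threshold are all invented for the example. It shows why exact string comparison misses duplicates across transliterated Arabic name variants, while a simple normalize-then-fuzzy-match pass flags them for review.

```python
# Illustrative sketch: deduplicating registrations of the same person
# recorded under different IDs with different name transliterations.
from difflib import SequenceMatcher

# Hypothetical registrations of one person under two ID regimes.
registrations = [
    {"id_type": "pre-war national ID", "name": "Mohammed Al-Hashimi"},
    {"id_type": "local authority card", "name": "Muhammad Alhashimi"},
]

def normalize(name: str) -> str:
    """Collapse common transliteration differences before comparing."""
    n = name.lower().replace("-", "").replace(" ", "")
    # Map frequent variant spellings of the same Arabic name to one form.
    for variant in ("mohammed", "mohamed"):
        n = n.replace(variant, "muhammad")
    return n

a, b = (r["name"] for r in registrations)

# Exact comparison misses the duplicate entirely.
exact_match = a == b  # False

# Fuzzy similarity on normalized names flags it as a likely duplicate.
similarity = SequenceMatcher(None, normalize(a), normalize(b)).ratio()
fuzzy_match = similarity > 0.9
```

Even this toy version hints at the ethical stakes: a lower threshold catches more duplicates but risks wrongly merging distinct people, while a higher one risks double registration, and either error falls on beneficiaries who never meaningfully consented to the matching.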
Read the full article about using AI in humanitarian contexts by Thomas Byrnes at ICTworks.