Giving Compass' Take:

• New Scientist reports on a project by police in the UK to use artificial intelligence (AI) in crime prediction, hoping to intervene with those at risk of committing a violent offense or becoming a victim.

• Beyond the technology itself, the project raises ethical dilemmas. Would such a system unfairly target certain groups? How can we be sure its power would be used wisely?

• Here's why AI needs to reflect society’s diversity.


Police in the UK want to predict serious violent crime using artificial intelligence, New Scientist can reveal. The idea is that individuals flagged by the system will be offered interventions, such as counseling, to avert potential criminal behavior.

However, one of the world’s leading data science institutes has expressed serious concerns about the project after seeing a redacted version of the proposals.

The system, called the National Data Analytics Solution (NDAS), uses a combination of AI and statistics to try to assess the risk of someone committing or becoming a victim of gun or knife crime, as well as the likelihood of someone falling victim to modern slavery.

West Midlands Police is leading the project and has until the end of March 2019 to produce a prototype. Eight other police forces, including London’s Metropolitan Police and Greater Manchester Police, are also involved. NDAS is being designed so that every police force in the UK could eventually use it.

Police funding has been cut significantly in recent years, says Iain Donnelly, the police lead on the project, so forces need a system that can look at all individuals already known to officers and prioritize those who need interventions most urgently.

Read the full article about AI stopping crime before it happens by Chris Baraniuk at New Scientist.