There is widespread agreement that responsible artificial intelligence requires principles such as fairness, transparency, privacy, human safety, and explainability. Nearly all ethicists and technology policy advocates stress these principles and push for algorithms that uphold them.

But it is not always clear how to operationalize these broad principles or how to resolve conflicts between competing goals. Moving from the abstract to the concrete in algorithm development is not easy, and a focus on one objective sometimes comes at the expense of others.

In the criminal justice area, for example, Richard Berk and colleagues argue that there are many kinds of fairness and that it is “impossible to maximize accuracy and fairness at the same time, and impossible simultaneously to satisfy all kinds of fairness.” While sobering, that assessment is likely on the mark and must therefore inform our thinking about how to resolve these tensions.
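A minimal sketch makes the tension concrete. The groups, labels, and predictions below are hypothetical toy data, not figures from Berk's work: a classifier that is perfectly accurate for two groups with different base rates automatically equalizes error rates across groups, yet it fails demographic parity, and forcing parity would require introducing errors.

```python
# A minimal sketch of the fairness trade-off. All groups, labels, and
# predictions here are hypothetical toy data, not Berk's results.

def rates(y_true, y_pred):
    """Return (selection rate, false positive rate) for one group."""
    selection = sum(y_pred) / len(y_pred)
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    fpr = sum(negatives) / len(negatives)
    return selection, fpr

# Group A has a 50% base rate of the outcome, Group B 25% (hypothetical).
y_true_a = [1, 1, 0, 0, 1, 0, 1, 0]
y_true_b = [1, 0, 0, 0, 0, 1, 0, 0]

# A perfectly accurate classifier simply reproduces the labels.
y_pred_a = list(y_true_a)
y_pred_b = list(y_true_b)

sel_a, fpr_a = rates(y_true_a, y_pred_a)
sel_b, fpr_b = rates(y_true_b, y_pred_b)
print(f"Group A: selection rate {sel_a:.2f}, false positive rate {fpr_a:.2f}")
print(f"Group B: selection rate {sel_b:.2f}, false positive rate {fpr_b:.2f}")

# Output: both false positive rates are 0.00, so error-rate fairness holds,
# but selection rates are 0.50 vs 0.25, so demographic parity fails.
# Equalizing selection rates would mean flagging members of Group B who do
# not have the outcome, trading accuracy for one notion of fairness.
```

Even in this stylized setting, no threshold can deliver perfect accuracy, equal error rates, and equal selection rates at once when base rates differ, which is the crux of the impossibility Berk and colleagues describe.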

Algorithms can also be problematic because they are sensitive to small data shifts. Ke Yang and colleagues note this risk and urge designers to exercise care in system development. Worryingly, they point out that “small changes in the input data or in the ranking methodology may lead to drastic changes in the output, making the result uninformative and easy to manipulate.”
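A minimal sketch, using hypothetical candidate names and scores rather than data from Yang's work, shows how such instability arises: when composite scores are nearly tied, a change of a few hundredths in a single input reorders the entire ranking.

```python
# A minimal sketch of ranking instability. Candidate names, attribute
# scores, and weights are hypothetical, not from Yang and colleagues.

def rank(candidates):
    """Order candidates by a fixed weighted sum of two attribute scores."""
    composite = lambda name: 0.6 * candidates[name][0] + 0.4 * candidates[name][1]
    return sorted(candidates, key=composite, reverse=True)

# Three candidates whose composite scores are nearly tied.
candidates = {
    "alice": (0.80, 0.70),   # composite 0.760
    "bob":   (0.78, 0.72),   # composite 0.756
    "carol": (0.79, 0.70),   # composite 0.754
}
print(rank(candidates))      # ['alice', 'bob', 'carol']

# Nudge one of carol's attributes by 0.02, a change within plausible
# measurement noise, and she jumps from last place to first.
candidates["carol"] = (0.79, 0.72)   # composite 0.762
print(rank(candidates))      # ['carol', 'alice', 'bob']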

Read the full article about AI in the federal government by Darrell M. West at Brookings.