Giving Compass' Take:
- Benjamin Boudreaux discusses the ethical questions raised by emerging uses of artificial intelligence, particularly in military contexts.
- Artificial intelligence is expected to help in many areas; how will people respond to its potential negative effects?
- Learn more about the controversy surrounding artificial intelligence.
Members of Congress, the U.S. military, and prominent technologists have raised the alarm that the U.S. is at risk of losing an artificial intelligence (AI) arms race. China has already leveraged strategic investment and planning, access to massive data, and suspect business practices to surpass the U.S. in some aspects of AI implementation. There are worries that this competition could extend to the military sphere, with serious consequences for U.S. national security.
During the prior Cold War arms race era, U.S. policymakers and the military expressed consternation about a so-called “missile gap” with the USSR that potentially gave the Soviets military superiority. Other “gaps” also infected strategic analysis and public discourse, including concerns about space gaps, bomber gaps, and so forth.
There is a need for clarity on the types of ethical and other risks that military artificial intelligence may pose.
Echoes of gap anxiety continue today. The perspective that the U.S. is in an AI arms race suggests another gap—an AI “ethics gap” in which the U.S. faces a higher ethical hurdle to develop and deploy AI in military contexts than its adversaries. As a result of this gap, the U.S. could be at a competitive disadvantage vis-à-vis countries with fewer scruples about AI.
Read the full article about artificial intelligence and ethics by Benjamin Boudreaux at RAND Corporation.