Fix AI in the Private Sector by Fixing It in the Military First

Hear me out: Silicon Valley’s ethics via opt-out isn’t going to work anymore


What’s Wrong With AI

I wanted to write the War on the Rocks piece because I was sick of listening to people talk about “how much money would be saved” or “how much more accurate the system would be” once AI replaced humans in the military. Both of these assumptions have a bad track record and little evidence to support them. Automation (for which AI is merely a sexy buzzword) does not make systems cheaper, safer, or more accurate. What it does is push the boundaries of what a system can support in terms of response time and complexity.

Why the Fix Will Come From the DoD: Purchasing Power

You may be thinking: “Okay, sure. But I don’t see why we can’t still fix these problems in the private sector and have that impact trickle down to the military.”

It Can Only Be Done Here

I’ve been interested in reimagining software engineering by incorporating safety science into our process for a while. Only recently have I come to realize that discussions about “ethical” and “safe” technology in academia and the private sector are largely irrelevant. When the DoD decides what “safety critical” and “ethical AI” is in this space, the weight of their purchasing power will wipe out all other voices.

Author of Kill It with Fire: Manage Aging Computer Systems (and Future-Proof Modern Ones)
