Jun 9, 2021

I really appreciate the level of detail you’ve put into this response, but my point was not that AI has no value or that we should stop developing it. Notice how the article opens with examples of large, complex, all-encompassing systems? I’m pushing back against the assertion from the tech community that because AI works well in the small, low-risk use cases you’ve listed in your response, it can be successfully scaled up to large, complex, high-risk use cases like weapons systems (which is what I work on). Yes, AI saves billions optimizing many different types of things… but what is the negative impact if it makes a mistake on, say, a product recommendation? Almost nothing; the consumer maybe gets a laugh out of it. The negative impact is so negligible that we don’t even have a way to track it. We don’t build SLOs around AI reaching the wrong conclusion, and we leave almost no room for the difference between correlation and causation, which is extremely dangerous.
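To make the SLO point concrete, here is a minimal sketch (every name and threshold is mine, invented for illustration) of what an error budget for wrong conclusions would look like if we could measure them the way we measure availability:

```python
# Hypothetical sketch, not a real monitoring setup: treating "the model
# reached the wrong conclusion" as an SLO the way we treat request
# availability. The target and the numbers below are invented.

WRONG_CONCLUSION_SLO = 0.999  # invented target: 99.9% of outputs correct

def error_budget_remaining(total_outputs: int, wrong_outputs: int) -> float:
    """Fraction of the window's error budget still unspent."""
    allowed_errors = total_outputs * (1 - WRONG_CONCLUSION_SLO)
    if allowed_errors == 0:
        return 1.0 if wrong_outputs == 0 else 0.0
    return max(0.0, 1 - wrong_outputs / allowed_errors)

# The arithmetic is trivial; the blocker is the numerator. For a product
# recommendation nobody labels which outputs were "wrong", so wrong_outputs
# is unmeasurable and the SLO cannot exist.
print(error_budget_remaining(total_outputs=1_000_000, wrong_outputs=700))
# ~0.3, i.e. 70% of the budget consumed
```

The point of the sketch is that the hard part is not the math; it is that nobody collects the “wrong conclusion” signal in the first place.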

I don’t think AI is useless; I think it is DESIGN DEPENDENT, and we invest almost NOTHING in considering the different ways AI can be applied within a system. Okay, that’s not 100% true: Microsoft invests quite a lot in this, but I’ve seen little evidence anyone else does.

Every problem has a number of different potential solutions. Your successful examples are about optimization, and is optimization really a decision? I have this conversation at work a lot. There’s a big difference between using AI to improve the targeting of a missile you’ve decided to fire and using AI to decide to fire the missile in the first place. If we see great results in targeting, should we actually expect to see great results in automating the firing? One is based on very clear-cut, easy-to-quantify, control-theory-based calculations we’ve been using since before computers existed. The other is highly situational and based on criteria that key decision makers often cannot articulate, let alone quantify.
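To illustrate how tractable the targeting side is (my sketch of the textbook math, not anything from a real system): the classic guidance law, proportional navigation, is a single formula whose inputs are all directly measurable physical quantities.

```python
# Illustrative sketch: proportional navigation, a textbook guidance law
# that predates digital computers. Every input is a measurable physical
# quantity, which is exactly what makes the optimization problem so clean.

def proportional_navigation(closing_velocity: float,
                            los_rate: float,
                            nav_gain: float = 3.0) -> float:
    """Commanded lateral acceleration: a_c = N * V_c * (dλ/dt).

    closing_velocity: rate at which range to the target shrinks (m/s)
    los_rate:         rotation rate of the line of sight (rad/s)
    nav_gain:         dimensionless constant, typically 3-5
    """
    return nav_gain * closing_velocity * los_rate

# There is no equivalent formula for "should this missile be fired at
# all?" None of that function's inputs (intent, context, rules of
# engagement) reduce to measurable quantities.
```

Notice that the entire function is one multiplication over sensor-derived numbers; the firing decision has no such reduction.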

I think we need to develop a more nuanced language around the difference between initiating an action and calculating how to optimize an action already under way, because by calling both of these things “decisions” we make it difficult for people like you and me to have conversations without someone getting outraged.
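One way to see the vocabulary gap (my sketch of the distinction, not a formal proposal): the two things we currently lump together as “decisions” do not even have the same shape.

```python
# Sketch only: the two kinds of "decision" have different signatures and
# different worst cases. Collapsing them into one word hides that.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    parameters: dict = field(default_factory=dict)

def optimize(action: Action, sensor_data: dict) -> Action:
    """Refine the parameters of an action a human has already committed to.
    Worst case: a badly parameterized version of an intended action."""
    ...

def initiate(world_state: dict) -> Action | None:
    """Produce an action out of the current situation, or decline to.
    Worst case: an unintended action nobody committed to."""
    ...
```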

You assert that I don’t know much about AI; I will assert that you obviously know nothing about SAFETY. Whenever people dismissively throw around the term “human error,” people who have taken the time to learn about the study of safety as a science go “uh-oh…” I think you would be surprised how much academic research there is in this space, and how deep the knowledge is. I recommend starting with Sidney Dekker’s work, like The Field Guide to Understanding Human Error, and moving on to Charles Perrow’s work (Normal Accidents is the classic, but it’s very long).

In general, the AI community needs to drop the attitude of “if people disagree, it’s because they don’t understand our very difficult field that only the smartest people can understand.” It makes you guys look silly and is not persuasive at all. Lisanne Bainbridge documented many of the things the AI community is now discovering about workforce automation back in 1983, in “Ironies of Automation.” 1983! Think of all the time and effort the community could save if, rather than hiding from critique and bemoaning how only experts can understand you, it embraced the insights of other fields.


Written by Marianne Bellotti

Author of Kill It with Fire: Manage Aging Computer Systems (and Future-Proof Modern Ones)
