Marianne Bellotti
1 min readJun 9, 2021


Antifragile is a term from systems theory. It refers to systems that become stronger when they fail. My argument is that AI focused on making decisions for people is more vulnerable to bad data, while AI focused on helping people make decisions via differential diagnosis or by triggering critical thinking is antifragile. Very early forays into computer-assisted medical diagnosis were quite bad, and yet they still produced better outcomes, because they improved doctors’ critical thinking and communication. We should lean into that more and stop obsessing over building machines that can beat people.

All of this is in the article, which thousands of people read and understood and three people didn’t (one of whom later admitted he hadn’t read it), so… I understand your frustrations and know it can be difficult to feel like your discipline is being attacked, but I feel pretty good about the SLOs on the approach I took to explaining it 😂

Incidentally, the audience for explainability is other AI experts, not users. Users optimize for reducing cognitive load. They will misinterpret probabilities no matter how “explainable” the model is. So this isn’t an appropriate safety solution; it’s a blame-shifting exercise… a very common one that predates AI by decades. Dekker has whole chapters on how this plays out.


Author of Kill It with Fire: Manage Aging Computer Systems (and Future-Proof Modern Ones)
