Intelligence (artificial or otherwise) Might Not Exist

Marianne Bellotti
9 min read · May 22, 2024

The dangerous game of judging the intelligence of machines when we can’t do it in people.

For irony's sake, I generated this image with ChatGPT 😜

Today I was flipping through my feeds when I noticed some commentary on the collapse of OpenAI's "superalignment" team. I'm skeptical of the concept of "alignment" in general. Values and objectives tend to vary wildly from culture to culture, or even from person to person, and it always feels like the AI community assumes there will be a single fix that aligns a system correctly. You get the impression from reading the think pieces that most of the startups in this space believe there's one common set of values they can start with, then fill in the outliers and their differing values later. Well … we know the types of people who get left out of that story.

That being said, I would be pretty interested in the output of actual researchers experimenting with actual approaches to aligning a statistical model with pretty much any set of values at all. I collect fun social science papers for technologists, and I'm always happy to read more.

But … on the other hand, every time I read a quote from someone in the AI space talking about building machines that are smarter than people, I can't help but think: by what metric?

Written by Marianne Bellotti

Author of Kill It with Fire: Manage Aging Computer Systems (and Future Proof Modern Ones)