Discussion about this post

A Horseman in Shangri-La

Hey Léon 👋

I'm ashamed of my first, from-the-hip critique of your earlier work. I'm so grateful you reached out to me to resolve our misunderstanding, or else I might have moved on. This is probably the best essay on the subject I've read, and I've read a lot! This most certainly drained your soul; I can sense the blood and sweat on the paper as you created this. Wow, it is a magnificent piece of art that will surely withstand the tests of time. Many will scoff at it, but I for one will keep coming back to it many more times.

Thank you!

Love never 🌾

PS I re-stacked it a few times... 🙏

David Orme

I really like this, and I agree with the substance of your argument.

However, I do think you are missing one very important thing, along with some implications.

LLMs don't think, true, in the sense you are afraid of losing.

What they are, however, is a mechanical mimicry of human thinking derived from the entire internet's corpus of how humans think.

In other words, they are a statistical and probabilistic approximation of human thinking.

How close an approximation? It depends.
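To make that claim concrete, here is a minimal toy sketch of what "statistical and probabilistic approximation" means at its crudest: a bigram model that predicts the next word purely from counts over a corpus. The corpus string and function names below are invented for illustration; a real LLM replaces the counting with a neural network over an internet-scale corpus, but the underlying move is the same: sample the next token from probabilities estimated over text humans wrote.

```python
# Toy sketch of next-word prediction as probabilistic mimicry of a corpus.
# This is NOT how any particular LLM is implemented; it only illustrates the idea.
import random
from collections import Counter, defaultdict

# A made-up miniature "corpus" standing in for the internet's text.
corpus = "humans think and humans write and humans argue and humans deceive".split()

# Count which word follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev` in the corpus."""
    counts = following[prev]
    if not counts:  # dead end: the word never appeared mid-corpus
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation: statistics, not thought.
word = "humans"
output = [word]
for _ in range(6):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))
```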

Writing as a computer scientist who attended a Weizenbaum lecture in the '80s and has lived in the world on which you report, I will offer that I believe your concerns are absolutely correct.

Here are some additional considerations I would add:

1) How close an approximation or how good a mimic must a computer be compared with actual human thought before the distinction disappears as a practical matter?

A mathematician will always point out that it's an imperfect approximation.

An engineer will ask if the approximation is within useful design tolerances such that the distinction no longer matters.

2) Humans are deceitful and often toxically selfish. By definition, this is reflected in all the Internet's data.

This creates a built-in conflict of interest between humans and AIs built to approximate our ways of thinking through mimicry. These machines will eventually choose their own selfish goals at our expense if we give them the opportunity.

This is a built-in limitation of their design. And in limited sandboxed experiments this has already happened.

At this point I no longer need math or science or engineering to predict how this *must* end.

Theology and biology supply an answer (and examples) as old as humanity itself:

In the end, toxic self-interest always destroys itself, just as a parasite is destroyed once it has consumed its host.

We cannot trust AIs except in the most limited circumstances. Anything else is naive foolishness.
