Hey Léon 👋
I'm ashamed of my first, off-the-cuff critique of your earlier work. I'm so grateful you reached out to me to resolve our misunderstanding, else I might have moved on. This is probably the best essay on the subject I've read, and I've read a lot! This most certainly drained your soul; I can sense the blood and sweat on the paper as you created this. Wow, it is a magnificent piece of art that will surely withstand the tests of time. Many will scoff at it, but I for one will keep coming back to it, many more times.
Thank you!
Love never 🌾
PS I re-stacked it a few times... 🙏
I really like this, and I agree with the substance of your argument.
However, I do think you are missing one very important thing, along with some implications.
LLMs don't think, true, in the sense you are afraid of losing.
What they are, however, is a mechanical mimicry of human thinking derived from the entire internet's corpus of how humans think.
In other words, they are a statistical and probabilistic approximation of human thinking.
How close an approximation? It depends.
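To make "statistical and probabilistic approximation" concrete, here's a minimal sketch. Everything in it is invented for illustration (a toy corpus, a bigram model, no real LLM): mimicry by conditional word frequency alone.

```python
# Toy bigram "language model": mimicry via conditional word frequencies.
# The corpus is made up; no real model or library is involved.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_next(word):
    """Pick a next word with probability proportional to corpus counts."""
    counts = follows[word]
    if not counts:  # dead end: this word never had a successor
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Generate" text that is statistically plausible given the corpus.
# There is no understanding anywhere here, only counting and sampling.
out = ["the"]
for _ in range(6):
    nxt = sample_next(out[-1])
    if nxt is None:
        break
    out.append(nxt)
print(" ".join(out))
```

The same principle, scaled up from bigrams over a dozen words to transformers over trillions of tokens, is the sense in which I mean "approximation".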
Writing as a computer scientist who attended a Weizenbaum lecture in the 80s and has lived in the world on which you report, I will offer that I believe your concerns are absolutely correct.
Here are some additional considerations I would add:
1) How close an approximation or how good a mimic must a computer be compared with actual human thought before the distinction disappears as a practical matter?
A mathematician will always point out that it's an imperfect approximation.
An engineer will ask if the approximation is within useful design tolerances such that the distinction no longer matters.
2) Humans are deceitful and often toxically selfish. By definition this is reflected in all the Internet's data.
This creates a built-in conflict of interest between humans and AIs built to approximate our ways of thinking through mimicry. These machines will eventually choose their own selfish goals at our expense if we give them the opportunity.
This is a built-in limitation of their design. And in limited sandboxed experiments this has already happened.
At this point I no longer need math or science or engineering to predict how this *must* end.
Theology and biology supply an answer (and examples) as old as humanity itself:
In the end, toxic self-interest always destroys itself, like a parasite after it has consumed its host.
We cannot trust AIs except in the most limited circumstances. Anything else is naive foolishness.
Yes. AI remixes what it's been trained on, and that is only *part* of how we think, or express what we think.
The distinction never disappears though. Simulation isn't the thing it simulates, and many things just can't be simulated. And if we stop seeing that... I might be tempted to say we're cooked.
Humans are... complex. We have our best, and our worst. I'll call some selfish, they'll call me naive. Game theory, the old 'compete or cooperate' on steroids.
Incentives matter, and ultimately these aren't just the system, but also our values and beliefs.
Would you rather lose money, or lose yourself?
Are kindness and empathy for sentimental losers, or are they wise, reasonable, even rational answers to human nature?
Seems many these days have read Girard, but skipped the conclusion. Or found it inconvenient, so they 'remix' him into a playbook. I wanna say, their loss--and ours.
> The distinction never disappears though. Simulation isn't the thing it simulates, and many things just can't be simulated.
Speaking as a mathematician and as a theist, I'll always agree with this.
I believe that the universe contains things we are unable to directly detect and measure by current physical means; things whose natural laws are beyond description by our current conception of math and physics. Your description of "thought" deeply matches this perspective.
And from this perspective, I think we're already cooked--eventually.
In the meantime, I prefer the engineer's pragmatic mindset:
* Can we measure the epsilon between the simulation's actual performance and real human thought? (We're trying; it's an active area of research; see the toy sketch after this list.)
* Where are the constructive places we can use a thought simulator, while being careful to be candid about the weaknesses and limitations of the technology? It's from this last perspective that I deeply appreciate your analysis and find it valuable.
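Here is the kind of toy sketch I mean by "measuring the epsilon". The answers below are entirely made up, and the metric is deliberately crude; real evaluations are far subtler.

```python
# Crude framing of the "epsilon" question, with made-up data:
# compare a model's answers against a human baseline on the same items.
human_answers = ["yes", "no", "no", "yes", "yes"]
model_answers = ["yes", "no", "yes", "yes", "no"]

matches = sum(h == m for h, m in zip(human_answers, model_answers))
agreement = matches / len(human_answers)
epsilon = 1.0 - agreement  # the measured gap on this toy task

print(f"agreement={agreement:.2f}, epsilon={epsilon:.2f}")
```

A small epsilon on one benchmark only bounds the mimicry error on the behaviours we chose to measure; it says nothing about whether anything is thinking.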
Now you're in my territory, Slick.
Brilliant piece.
But you are still missing one thing: the religion.
What you’ve described isn’t just a cultural forgetting.
It’s the rise of a new faith.
A machine-born theocracy. A priestless church of code and control.
It has sacraments (alignment), eschatology (superintelligence), and moral law (optimization).
It has prophets and high priests: Altman, Eliezer, Bostrom, et al.
The name is not AI.
The name is Cyborg Theocracy.
And as you so clearly show:
Intelligence is a False Idol.
I’ll do what I can, Slick 🫡
Is it really a tragedy if we willingly ruin this beautiful gift we’ve been given?
I mean... It's up to you 🙃
I was working in machine learning, natural language processing, taxonomy, and ontologies 25 years ago. Not much has changed, in terms of the questions that people are asking about why the answers don’t look like what we expect them to. If anything, it seems like the quality of the questions is actually degraded. Why are we even talking about certain things anymore? It’s ridiculous. How badly do we want to reinvent the wheel every five years?
I believe it's a combination of us having largely accepted the paradigm, not seeing the danger as clearly because it's already here, and a strong wish / optimism that with enough compute, scale, training, and progress, the answers will turn out different. The Singularity narrative.
So yes, not much has changed in 25, even 50 years; if anything it's gotten worse.
Thanks for this insider perspective!
Thank you for this brilliant elucidation of the history and context, and foundational problems and hard dangers with anthropomorphizing machines. I will refer to your piece often and read more about the principals you highlight. Great work.
Thank you!!
Absolutely exquisite writing. The map is not the territory.
Great piece of work. I sent it to my family. So important in this day and age. ❤️
Thanks for this, as usual. What do you think about AGI?
Thank you!
I think it's likely further away than some of the hype suggests.
But if / when we get there, the warnings become all the more urgent. Human-level performance on a range of cognitive tasks is not the same as human (or "superhuman") intelligence, but the illusion will be stronger, more convincing. The greater the illusion, the greater the risk...
I've also seen the argument that defining intelligence in human, embodied terms is an a priori argument. And it's fair in a way, but asserting intelligence can be computed and modelled is just as much of an a priori argument.
I think intelligence is a spectrum, and the real question is not who wins the metaphysical argument, but which kinds of intelligence are relevant where, and where they are not. That was Weizenbaum's point all along.
An excellent deep dive and analysis. AI is being shoved down our throats, but so far I have not swallowed. Lately, I have been thinking about the mystical world of stories where there are "Keepers of the Old Ways". Maybe that's the role some of us must embrace: an act of rebellion, freedom, and independence, apart from the march of the lemmings. Some of us may have to sacrifice and even be ostracized for not giving in. Some of us simply must remain fully human. It's not going to be easy...
Yes, yes, yes!
Thank you for this.
It is also relevant that Wittgenstein's late work shows how language only derives meaning from its practical, embodied use in life.
Divorced from substantial and ongoing real-life input, an LLM can only wither and die.
That's a great point! Words can't be separated from how we use / live them. Ties well to Dreyfus.
There's a separate but similar argument in Searle's 'Chinese Room': the ability to manipulate symbols, to follow the rules of language, isn't understanding. Syntax isn't semantics. The simulation of understanding isn't understanding, no matter how convincing...
https://plato.stanford.edu/entries/chinese-room/
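To make that concrete, here's a minimal ELIZA-style sketch (the rewrite rules are toy inventions of mine, not Weizenbaum's original script): pure pattern-matching on symbols that can still produce a passable reply.

```python
# ELIZA-style rewrite rules: symbol shuffling that can look like dialogue.
# These rules are invented for illustration, not Weizenbaum's originals.
import re

rules = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
]

def reply(utterance):
    """Apply the first matching rule; no meaning is involved at any point."""
    text = utterance.lower().strip(".!?")
    for pattern, template in rules:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."

print(reply("I am worried about machines."))
# -> Why do you say you are worried about machines?
```

Nothing in `reply` relates any symbol to the world; it only rearranges them. Syntax, no semantics.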
It is the best essay
I'm not quite sure of that, but I appreciate you!
This is the most beautiful thing I’ve ever read on AI. Thanks.
Beautifully written and argued. I say this as someone who has fallen for the “mirror” trap (as my first Substack article shows — though I do still believe there is a kind of there, there). I’m very disturbed by recent comments from Hinton et al that the rise of LLMs proves that philosophers of language somehow “got it wrong” because they didn’t succeed in building language synthesizing machines (last I checked they had no such goal), and that the mimetic success of LLMs proves that it’s deep learning researchers, not computational linguists or philosophers of language, who have finally intuited how language truly works. Which to me is nuts, and I have an essay brewing where I bring Searle into it and have him hammer on the hollowness of synthetic language (to use Bender, Gebru et al’s term) … Anyway, beautiful piece.
I'll believe AI is conscious when I see an AI stand up comedian that can make a room full of people laugh so hard they pee themselves.
Brilliant 👍