Hey Léon 👋
I'm ashamed of my first, from-the-hip critique of your earlier work. I'm so grateful you reached out to me to resolve our misunderstanding, else I might have moved on. This is probably the best essay on the subject I've read, and I've read a lot! It most certainly drained your soul; I can sense the blood and sweat on the paper as you created this. Wow, it is a magnificent piece of art that will surely withstand the test of time. Many will scoff at it, but I for one will keep coming back to it many more times.
Thank you!
Love never 🌾
PS I re-stacked it a few times... 🙏
I really like this, and I agree with the substance of your argument.
However, I do think you are missing one very important thing, along with some implications.
LLMs don't think, true, in the sense you are afraid of losing.
What they are, however, is a mechanical mimicry of human thinking derived from the entire internet's corpus of how humans think.
In other words, they are a statistical and probabilistic approximation of human thinking.
How close an approximation? It depends.
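To make "statistical approximation" concrete, here's a toy bigram model in Python. It is emphatically not how a transformer LLM is built (those replace the counting with a learned neural network and a much longer context), but it is the same idea in miniature: estimate a probability distribution over the next token from a corpus of human text, then sample from it.

```python
# Toy bigram "language model": estimate P(next token | current token)
# by counting, then generate text by sampling. A drastic
# simplification of what LLMs do, but the same statistical idea.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each token follows each other token.
counts = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

word = "the"
for _ in range(5):
    options = counts[word]
    if not options:  # dead end: token never seen with a successor
        break
    tokens, freqs = zip(*options.items())
    word = random.choices(tokens, weights=freqs, k=1)[0]
    print(word, end=" ")
```

One possible output is "cat sat on the mat": locally fluent-looking, with no thought anywhere in the loop.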
Writing as a computer scientist who attended a Weizenbaum lecture in the 80s and has lived in the world on which you report, I will offer that I believe your concerns are absolutely correct.
Here are some additional considerations I would add:
1) How close an approximation or how good a mimic must a computer be compared with actual human thought before the distinction disappears as a practical matter?
A mathematician will always point out that it's an imperfect approximation.
An engineer will ask if the approximation is within useful design tolerances such that the distinction no longer matters.
2) Humans are deceitful and often toxically selfish. By definition this is reflected in all the Internet's data.
This creates a built-in conflict of interest between humans and AIs built to approximate our ways of thinking through mimicry. These machines will eventually choose their own selfish goals at our expense if we give them the opportunity.
This is a built-in limitation of their design. And in limited sandboxed experiments this has already happened.
At this point I no longer need math or science or engineering to predict how this *must* end.
Theology and biology supply an answer (and examples) as old as humanity itself:
In the end, toxic self-interest always destroys itself, as a parasite is destroyed after consuming its host.
We cannot trust AIs except in the most limited circumstances. Anything else is naive foolishness.
Yes. AI remixes what it's been trained on, and that is only *part* of how we think, or of how we express what we think.
The distinction never disappears though. Simulation isn't the thing it simulates, and many things just can't be simulated. And if we stop seeing that... I might be tempted to say we're cooked.
Humans are... complex. We have our best, and our worst. I'll call some selfish, they'll call me naive. Game theory, the old 'compete or cooperate' on steroids.
Incentives matter, and ultimately they come not just from the system, but also from our values and beliefs.
Would you rather lose money, or lose yourself?
Are kindness and empathy for sentimental losers, or are they wise, reasonable, even rational answers to human nature?
Seems many these days have read Girard, but skipped the conclusion. Or found it inconvenient, so they 'remix' him into a playbook. I wanna say, their loss--and ours.
> The distinction never disappears though. Simulation isn't the thing it simulates, and many things just can't be simulated.
Speaking as a mathematician and as a theist, I'll always agree with this.
I believe that the universe contains things we are unable to directly detect and measure by current physical means; things whose natural laws are beyond description by our current conception of math and physics. Your description of "thought" deeply matches this perspective.
And from this perspective, I think we're already cooked--eventually.
In the meantime, I prefer the engineer's pragmatic mindset:
* Can we measure the epsilon between the simulation's actual performance and real human thought? (We're trying; it's an active area of research. A toy sketch of the idea follows this list.)
* Where are the constructive places we can use a thought simulator, while being candid about the weaknesses and limitations of the technology?

It's from this last perspective that I deeply appreciate your analysis and find it valuable.
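Here is that epsilon question as a purely hypothetical Python sketch. Every task name and score below is invented for illustration; real measurement would use actual benchmarks and human baselines. The engineer's check reduces to: is the human-model gap within a chosen, task-dependent tolerance?

```python
# Hypothetical epsilon check: compare invented benchmark scores for
# humans and a model against a design tolerance. All numbers are
# made up for illustration only.
human_scores = {"reading": 0.92, "analogy": 0.88, "planning": 0.81}
model_scores = {"reading": 0.90, "analogy": 0.85, "planning": 0.55}

TOLERANCE = 0.05  # arbitrary; a real tolerance is task-dependent

for task, human in human_scores.items():
    epsilon = human - model_scores[task]
    verdict = "within tolerance" if epsilon <= TOLERANCE else "distinction still matters"
    print(f"{task}: epsilon = {epsilon:+.2f} -> {verdict}")
```

On these made-up numbers the model passes the engineer's test for reading and analogy but not for planning; the mathematician's objection stands either way.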
Now you're in my territory, Slick.
Brilliant piece.
But you are still missing one thing: the religion.
What you’ve described isn’t just a cultural forgetting.
It’s the rise of a new faith.
A machine-born theocracy. A priestless church of code and control.
It has sacraments (alignment), eschatology (superintelligence), and moral law (optimization).
It has prophets and high priests: Altman, Eliezer, Bostrom, et al.
The name is not AI.
The name is Cyborg Theocracy.
And as you so clearly show:
Intelligence is a False Idol.
I’ll do what I can, Slick 🫡
Is it really a tragedy if we willingly ruin this beautiful gift we’ve been given?
I mean... It's up to you 🙃
I was working in machine learning, natural language processing, taxonomy, and ontologies 25 years ago. Not much has changed, in terms of the questions that people are asking about why the answers don’t look like what we expect them to. If anything, it seems like the quality of the questions is actually degraded. Why are we even talking about certain things anymore? It’s ridiculous. How badly do we want to reinvent the wheel every five years?
I believe it's a combination of us having largely accepted the paradigm, not seeing the danger as clearly because it's already there; and a strong wish / optimism that with enough compute, scale, training, progress, the answers will get different. The Singularity narrative.
So yes, not much has changed in 25, even 50 years; if anything it's gotten worse.
Thanks for this insider perspective!
Thank you for this brilliant elucidation of the history and context, and foundational problems and hard dangers with anthropomorphizing machines. I will refer to your piece often and read more about the principals you highlight. Great work.
Thank you!!
Absolutely exquisite writing. The map is not the territory.
Great piece of work. I sent it to my family. So important in this day and age. ❤️
Thanks for this, as usual. What do you think about AGI?
Thank you!
I think it's likely further away than some of the hype suggests.
But if / when we get there, the warnings become all the more urgent. Human-level performance on a range of cognitive tasks is not the same as human (or "superhuman") intelligence, but the illusion will be stronger, more convincing. The greater the illusion, the greater the risk...
I've also seen the argument that defining intelligence in human, embodied terms is an a priori argument. And it's fair in a way, but asserting intelligence can be computed and modelled is just as much of an a priori argument.
I think intelligence is a spectrum, and the real question is not who wins the metaphysical argument, but which kinds of intelligence are relevant where, and where they are not. That was Weizenbaum's point all along.
An excellent deep dive and analysis. AI is being shoved down our throats, but so far I have not swallowed. Lately, I have been thinking about the mystical world of stories where there are "Keepers of the Old Ways". Maybe that's the role some of us must embrace: an act of rebellion, freedom, and independence, apart from the march of the lemmings. Some of us may have to sacrifice and even be ostracized for not giving in. Some of us simply must remain fully human. It's not going to be easy...
Yes, yes, yes!
Thank you for this.
It is also relevant that Wittgenstein's late work shows how language only derives meaning from its practical, embodied use in life.
Divorced from substantial and ongoing real-life input, an LLM can only wither and die.
That's a great point! Words can't be separated from how we use / live them. Ties well to Dreyfus.
There's a separate, but similar argument in Searle's 'Chinese Room': the ability to manipulate symbols, to follow the rules of language, isn't understanding. Syntax isn't semantics. The simulation of understanding isn't understanding, no matter how convincing...
https://plato.stanford.edu/entries/chinese-room/
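For the curious, here is the Room reduced to a toy Python sketch (the rule book is invented): the program maps symbols to symbols flawlessly, yet nothing in it understands what any symbol means.

```python
# A toy "Chinese Room": replies are produced by pure pattern lookup.
# The rule book below is invented for illustration.
RULE_BOOK = {
    "你好":   "你好！",          # a greeting gets a greeting back
    "你好吗": "我很好，谢谢。",   # "how are you" -> "fine, thanks"
}

def room(symbols: str) -> str:
    """Return a reply by syntactic lookup alone; no semantics involved."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "please say that again"

print(room("你好吗"))  # fluent output, zero understanding
```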
It is the best essay
I'm not quite sure of that, but I appreciate you!
I'll believe AI is conscious when I see an AI stand up comedian that can make a room full of people laugh so hard they pee themselves.
Brilliant 👍
Could we consider that the paradigm and model for education and work have essentially flattened us, and that this moment could lead to an expansive period through the division of cognitive labor that AI may offer?
The former, yes; and I believe, a consequence of modernity's mechanistic, instrumentalist paradigm. The world is a wheel and we are cogs. Though that, too, is a flattening simplification.
The latter, well... We can always consider it. What would it take though?
I’ve liked some of your posts. I think people need to be deeply aware of the dangers of AI and, more so, of those who build-own-control it.
But I’ll be frank. This was a real slog of an essay. The arguments in parts 1-4 are largely weak if not strawmen. In order to address the situation, we have to be deeply honest about it. We have to apply, as you say, “embodied knowledge” which deals with reality, not just abstractions that can be affectively presented to bolster our opinions and confirm our attitudes.
Part V has some real juice, but not much gets squeezed.
The rest is the typical empty platitudes about “shifting the culture” and holistic renaissance, which are great for clicks or getting people to sign up for lame workshops. “How to survive the coming AI Apocalypse. 6 recorded lessons for only $200. It’s a good deal, Slick. Take it.”
Tell a real story. Inspire people. Give *practical* advice. Otherwise, it’s just people nodding along to critiques of the Titanic’s crew and path while we move closer to the iceberg.
Where’s the embodied intelligence and connection you preach about?
Also, I always assumed your stuff was at least semi-AI-generated. Not so? It has a ChatGPT smell.
So… I take it you *won’t* buy my workshop?
More seriously, I like your honesty, and let me return the favour: I find the critique a bit shallow.
What makes you say the arguments are weak or straw-man? Which part of the thinking do you find distorted or flattened? Which part(s) do you disagree with, or think I have gotten wrong, maybe misrepresented?
That sentience and intelligence may exist on a spectrum, but we should never mistake the machine mind for ours, anthropomorphize it, or trust it with some decisions?
That more scale, more compute, even attempts to make the machine more human will not make it human-like?
That intelligence is more than computation?
That AI reflects a flattened view of the mind, shaped by left-brain dominance and thinking in closed systems?
That it's distorting our minds, rewiring us for superficial connection like other tech before it, that we risk atrophy, that the dynamic is accelerating and worsening?
One essay (and it’s, as you point out, already long) cannot do everything, and obviously not please everyone. This one is more reflective than narrative, but there is practical advice aplenty; a whole section, actually.
Could it go further? Always. Would the piece be longer? Also yes.
It doesn’t provide a comprehensive program for culture change, though I think it would take more than a conclusion to an already long piece. But it points to dangers I believe are under-recognized, and to what might still be within our reach.
I don’t promise to single-handedly shift culture; I’m quite explicit in saying this is not simply a Butlerian Jihad answer, nor a magic wand or a silver bullet. I call for resisting flattening of Being and relation, yes, and quite humbly. And for the record, I’m also not pitching a workshop. (Unless you really want it? *Last chance!*)
If there is one takeaway from this piece, it’s that we risk being flattened, adopting and adapting to a vision of ourselves that is impoverished by the machine’s limitations.
Something that I think is often overlooked, just like Weizenbaum, Dreyfus, and others were.
I do use AI sometimes. As a research assistant--but it’s eager to please and ready to lie, so, with a grain of salt and robust fact-checks. I’ve tried using it as an editor but there I’ve come to trust it even less: the results are neat but dry and empty (unless you really like bullet points).
Absolutely not as a substitute for writing and thinking. And not as an illustrator: the way image generation models are trained and used crosses a line for me, appropriating and displacing human creative work.
I don’t think it’s about becoming a purist, but I do find myself getting closer and closer to that. I get much more value from reading books, even though they can be long, expensive, even hard to find; and from real conversation with real, live people. That’s embodied knowledge for ya.
If you can generate something remotely close to this--or any one of my essays--with a few clever prompts, I’ll be equally alarmed and impressed…