We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word (or fragment of a word) will come next in a sequence, based on the data it’s been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
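To make the “guessing what comes next” point concrete, here’s a toy sketch: a bigram counter in Python. Real LLMs are transformer networks predicting tokens rather than whole words, so treat this only as the smallest possible illustration of “pick the next word by probability”:

```python
import random
from collections import defaultdict, Counter

# A tiny corpus standing in for the "oceans of human data"
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which word (a bigram model: a crude stand-in for an LLM)
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Pick a next word in proportion to how often it followed `word` in the corpus."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_word("the"))  # e.g. "cat", chosen by probability, not by understanding
```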
So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
Good luck. Even David Attenborough can’t help but anthropomorphize. People will feel sorry for a picture of a dot separated from a cluster of other dots. The play by AI companies is that it’s human nature for us to want to give just about every damn thing human qualities. I’d explain more but as I write this my smoke alarm is beeping a low battery warning, and I need to go put the poor dear out of its misery.
I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.
So couldn’t we say LLMs aren’t really AI? Cuz that’s what I’ve come to terms with.
To be fair, the term “AI” has always been used in an extremely vague way.
NPCs in video games, chess computers, and other such tech are not sentient and do not have general intelligence, yet we’ve been referring to them as “AI” for decades without anybody taking issue with it.
I don’t think the term AI has been used in a vague way; it’s that there’s a huge disconnect between how the technical fields use it and how the general populace does, and marketing groups heavily abuse that disconnect.
Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.
The other definition is man-made. Artificial diamonds are a great example of this: they’re still diamonds at the end of the day, with the same chemical makeup and the same chemical and physical properties. The only difference is that they came from a laboratory and were made by adult workers instead of child slave labor.
My pet theory is that science fiction got the general populace to think of artificial intelligence using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.
Dafuq? Artificial always means man-made.
Nature also makes fake stuff. For example, fish that have an appendix that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man made.
No shit. Doesn’t mean it still isn’t extremely useful and revolutionary.
“AI” is a tool to be used, nothing more.
Still, people find it difficult to navigate this. Its use cases are limited, but it doesn’t enforce that limit by itself. The user needs to be knowledgeable of the limitations and care enough not to go beyond them. That’s also where the problem lies. Leaving stuff to AI, even if it compromises the results, can save SO much time that it encourages irresponsible use.
So to help remind people of the limitations of generative AI, it makes sense to fight the tendency of companies to overstate the ability of their models.
@technocrit While I agree with the main point that “AI/LLMs has/have no agency”, I must be the boring, ackchyually person who points out and remembers some nerdy things.
tl;dr: indeed, AIs and LLMs aren’t intelligent… but we aren’t as intelligent as we think we are, either, because we hold no “exclusivity” of intelligence within the biosphere (corvids, dolphins, etc.) and because there’s no such thing as non-deterministic “intelligence”. We’re just biologically compelled to think that we can think and that we’re the only ones who think, and this is just anthropocentric and naive of us (yeah, me included).
If you have the patience to read a long and quite verbose text, it’s below. If you don’t, well, no problem, just stick to my tl;dr above.
-----
First and foremost, everything is ruled by physics. Deep down, everything is just energy and matter (the latter of which, to quote the famous Einstein equation E = mc², is energy as well), and this inexorably includes living beings. Bodies, flesh, brains, nerves and other biological parts: they’re not so different from a computer case, CPUs/NPUs/TPUs, cables and other computer parts. To quote Sagan, it’s all “made of star stuff”, it’s all a bunch of quarks and other elementary particles clumped together, forming subatomic particles forming atoms forming molecules forming everything we know, including our very selves…
Everything is compelled to follow the same laws of physics, everything is subjected to the same cosmic principles, everything is subjected to the same fundamental forces, everything is subjected to the same entropy, everything decays and ends (and this comment is just a reminder, a cosmic-wide Memento mori).
It’s bleak, but this is the cosmic reality: cosmos is simply indifferent to all existence, and we’re essentially no different than our fancy “tools”, be it the wheel, the hammer, the steam engine, the Voyager twins or the modern dystopian electronic devices crafted to follow pieces of logical instructions, some of which were labelled by developers as “Markov Chains” and “Artificial Neural Networks”.
Then, there’s also the human non-exclusivity within the biosphere: corvids (especially Corvus moneduloides, the New Caledonian crow) are scientifically known for their intelligence, and so are dolphins, chimpanzees and many other eukaryotes. Humans love to think we’re exclusive in that regard, but we’re not, we’re just fooling ourselves!
IMHO, every time we try to argue “there’s no intelligence beyond humans”, it’s highly anthropocentric and quite biased/bigoted against the countless other species that currently exist on Earth (and possibly beyond this Pale Blue Dot as well). We humans often forget that we are a species ourselves (taxonomically classified as “Homo sapiens”). We tend to carry on our biological existence as if we were some kind of “deities” or “extraterrestrials” among a “primitive, wild life”.
Furthermore, I can point out a myriad of philosophical points, such as the one raised by the mere mention of “senses” (“Because it’s bodiless. It has no senses, …”): “my senses deceive me” is the starting point for Cartesian (René Descartes) doubt. While Descartes’ conclusion, “Cogito ergo sum”, is highly anthropocentric, it’s often ignored or forgotten by those who hold anthropocentric views on intelligence, as people often ground the seemingly “exclusive” nature of human intelligence on the ability to “feel”.
Many other philosophical musings deserve to be mentioned as well: the lack of free will (stemming from the very fact that we were unable to choose our own births), the nature of “evil” (both the Hobbesian line regarding “human evilness” and the Epicurean paradox regarding “metaphysical evilness”), social compliance (I must point to documentaries from Derren Brown on this subject), the inevitability of Death, among other deep topics.
All deep principles and ideas converging, IMHO, into the same bleak reality, one where we (supposedly “soul-bearing beings”) are no different from a “soulless” machine, because we’re both part of an emergent phenomenon (Ordo ab chao, the (apparent) order out of chaos) that has been taking place for Æons (billions of years and beyond, since the dawn of time itself).
Yeah, I know how unpopular this worldview can be and how downvoted this comment will probably get. Still, I don’t care: someone who gazed into the abyss must remember how the abyss always gazes back into us, even those of us who haven’t yet dared to gaze into it.
I’m someone compelled by my very neurodivergent nature to remember how we humans are just another fleeting arrangement of interconnected subsystems known as a “biological organism”, one that “managed” to throw stuff beyond the atmosphere (spacecraft) while still being unable to understand ourselves. We’re biologically programmed, just like the other living beings, to “fear Death”, even though our very cells are programmed to terminate on a regular basis (apoptosis) and we’re subject to the inexorable chronological fall towards “cosmic chaos” (entropy, as defined: “as time passes, the degree of disorder increases irreversibly”).
My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up - is that true “reasoning?”
It’s similar to the debate about self driving cars. Are they perfectly safe? No, but have you seen human drivers???
Human brains are much more complex than a mirroring script xD AI and supercomputers only have a fraction of the number of neurons in your brain. But you’re right, for you it’s probably not much different from AI
The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as “artificial neurons” in the context of neural networks). While ChatGPT has more “neurons” in this sense, it’s important to note that these are not the same as biological neurons, and the comparison is not straightforward.
86 billion neurons in the human brain isn’t that much compared to some of the larger neural networks with 1.7 trillion parameters, though.
I’m neurodivergent, I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly missing from my life. I can see how this could go very wrong with certain personalities…
E: I use it to give me ideas that I then test out solo.
This is very interesting… because the general saying is that AI is convincing to non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, therefore the AI’s assessment is convincing to you. Not trying to upset you, it’s genuinely fascinating how that theory holds true here as well.
I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.
If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick, if it is a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of any.
It is intelligent and deductive, but it is not cognitive or even dependable.
It’s not. It’s a math formula that predicts an output based on parameters derived from its training data.
Say you have the following sets of data:
- Y = 3, X = 1
- Y = 4, X = 2
- Y = 5, X = 3
We can calculate a regression model using those numbers to predict what Y would equal to if X was 4.
I won’t go into much detail, but
Y = b0 + b1×X + e, which here is Y = 2 + 1×X + e

e is our model’s error term; in an ideal world it equals 0 (which it does in this case), and it’s typically required to stay within 5% or 1% (at least in econometrics). b0 = 2 is our model’s bias (the intercept), and b1 = 1 is the parameter that determines how much an input X contributes when predicting Y.
If x = 4, then
Y = 2 + 1×4 + 0 = 6
Our model just predicted that if X is 4, then Y is 6.
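If you want to see that same calculation as code, here’s a minimal sketch (using numpy’s least-squares fit; the variable names are just illustrative):

```python
import numpy as np

# The three data points from above
X = np.array([1.0, 2.0, 3.0])
Y = np.array([3.0, 4.0, 5.0])

# Fit Y = b0 + b1*X by ordinary least squares
A = np.column_stack([np.ones_like(X), X])
(b0, b1), *_ = np.linalg.lstsq(A, Y, rcond=None)

print(b0, b1)        # roughly 2.0 and 1.0
print(b0 + b1 * 4)   # predicts roughly 6.0 for X = 4
```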
In a nutshell, that’s what AI does, but instead of numbers, it’s tokens (think symbols, words, pixels), and the formula is much much more complex.
This isn’t intelligence, and it isn’t deduction. It’s only prediction. This is the reason why AI often fails at common sense: the error builds up and you end up with nonsense, and since it’s not thinking, it will be just as confident when it’s incorrect as when it’s correct.
Companies calling it “AI” is pure marketing.
Wikipedia is literally just a very long number, if you want to oversimplify things into absurdity. Modern LLMs literally run on neural networks, just like you, just fewer of them and with far less structure. It is also on average more intelligent than you on far more subjects, and can deduce better reasoning than flimsy numerology - not because you are dumb, but because it is far more streamlined. Whether it is cognizant or even dependable while doing so is another thing entirely.
Modern LLMs waste a lot more energy for far fewer simulated neurons. We had what you are describing decades ago. It is literally built on the works of our combined intelligence, so how could it also not be intelligent? Perhaps the problem is that you have a loaded definition of intelligence. And prompts literally work because of its deductive capabilities.
Errors also build up in dementia and Alzheimer’s. We have people who cannot remember what they did yesterday; we have people with severed hemispheres, split brains, who say one thing and do something else depending on which part of the brain it’s relying on for the same inputs. The difference is that our brains have evolved over millennia, through millions and millions of lifeforms, as a matter of life and death; LLMs have just been a thing for a couple of years as a matter of convenience and buzzword venture capital. They barely have more neurons than flies, but are also more limited in regards to the input they have to process. The people running it as a service have a vested interest not to have it think for itself, but in what interests them. Like it or not, the human brain is also an evolutionary prediction device.
People don’t predict values to determine their answers to questions…
Also, it’s called a neural network not because it works exactly like neurons, but because it’s somewhat similar. They don’t “run on neural networks”; they’re called that because it’s more than one regression model, with information being passed on from one to another, sort of like a chain of neurons, but not exactly. In the case of LLMs it’s basically just another name for a transformer model.
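To make that “chain of regression models” picture concrete, here’s a toy sketch (made-up numbers, no training step, purely to show the shape of the computation):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    # Each "neuron" is basically a tiny regression (weights @ inputs + bias),
    # followed by a nonlinearity (ReLU) so that stacking layers is more than one big regression.
    return np.maximum(0.0, W @ x + b)

x = rng.normal(size=4)                                   # made-up input features
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)     # first layer's parameters
W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)     # second layer's parameters

hidden = layer(x, W1, b1)    # output of one "regression layer"...
output = W2 @ hidden + b2    # ...passed on as the input to the next
print(output)
```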
I don’t know enough to properly compare it to actual neurons, but at the very least, they seem to be significantly more deterministic and way way more complex.
Literally, go to chatgpt and try to test its common reasoning. Then try to argue with it. Open a new chat and do the exact same questions and points. You’ll see exactly what I’m talking about.
Alzheimer’s is an entirely different story, and no, it’s not stochastic. Seizures are stochastic, at least they look like that, which they may actually not be.
Literally, go to a house fly and try to test its common reasoning. Then try to argue with it. Find a new house fly and do the exact same questions and points. You’ll see what I’m talking about.
There’s no way to argue in such nebulous terms when every minute difference is made into an unsurpassable obstacle. You are not going to convince me, and you are not open to being convinced. We’ll just end up with absurd discussions, like talking about how and whether “stochastic” applies to Alzheimer’s.
What I never understood about this argument is…why are we fighting over whether something that speaks like us, knows more than us, bullshits and gets shit wrong like us, loses its mind like us, seemingly sometimes seeks self-preservation like us…why all of this isn’t enough to fit the very self-explanatory term “artificial…intelligence”. That name does not describe whether the entity is having a valid experiencing of the world as other living beings, it does not proclaim absolute excellence in all things done by said entity, it doesn’t even really say what kind of intelligence this intelligence would be. It simply says something has an intelligence of some sort, and it’s artificial. We’ve had AI in games for decades, it’s not the sci-fi AI, but it’s still code taking in multiple inputs and producing a behavior as an outcome of those inputs alongside other historical data it may or may not have. This fits LLMs perfectly. As far as I seem to understand, LLMs are essentially at least part of the algorithm we ourselves use in our brains to interpret written or spoken inputs, and produce an output. They bullshit all the time and don’t know when they’re lying, so what? Has nobody here run into a compulsive liar or a sociopath? People sometimes have no idea where a random factoid they’re saying came from or that it’s even a factoid, why is it so crazy when the machine does it?
I keep hearing the word “anthropomorphize” being thrown around a lot, as if we can’t be bringing others up into our domain, all the while refusing to even consider that maybe the underlying mechanisms that make us tick are not that special, certainly not special enough to grant us a whole degree of separation from other beings and entities, and maybe we should instead bring ourselves down to the same domain as the rest of reality. Cold hard truth is, we don’t know that consciousness isn’t just an emergent property of various different large models working together to show a cohesive image. If it is, would that be so bad? Hell, we don’t really even know if we actually have free will or if we live in a superdeterministic world, where every single particle moves along a predetermined path given to it since the very beginning of everything. What makes us think we’re so much better than other beings, to the point where we decide whether their existence is even recognizable?
I think your argument is a bit beside the point.
The first issue we have is that intelligence isn’t well-defined at all. Without a clear definition of intelligence, we can’t say whether something is intelligent, and even though we as a species have tried to come up with a definition of intelligence for centuries, there still isn’t a well-established one.
But the actual question here isn’t “Can AI serve information?” but whether AI is an intelligence. And LLMs are not. They are not beings, they don’t evolve, they don’t experience.
For example, LLMs don’t have a memory. If you use something like ChatGPT, its state doesn’t change when you talk to it. It doesn’t remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It’s like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.
The LLM itself can’t change due to the conversation you are having with it. It can’t learn, it can’t experience, it can’t change.
All that is done in a separate training step, where essentially a new LLM is generated.
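A rough sketch of what that “feed the whole transcript back in every turn” loop looks like in practice (model_generate here is a placeholder, not any specific vendor’s API):

```python
# Minimal sketch of a stateless chat loop. model_generate() is a stand-in for
# "send text to the model, get text back", not any real API.

history = []  # the "memory" lives out here in plain application code, not inside the model

def model_generate(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"(model output for a prompt of {len(prompt)} characters)"

def chat(user_message: str) -> str:
    history.append(("user", user_message))
    # The ENTIRE transcript so far is re-sent on every single turn;
    # the model's weights never change between turns.
    prompt = "\n".join(f"{role}: {text}" for role, text in history)
    reply = model_generate(prompt)
    history.append(("assistant", reply))
    return reply

print(chat("Hi, remember me?"))
print(chat("What did I just say?"))  # it only "remembers" because we re-sent the transcript
```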
If we can’t say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal undermining of copyright, the weaponization of AI by the marketing people, the dystopian levels of dependence we’re developing on a so far unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don’t know if we’re a few steps away from having massive AI breakthroughs, and we don’t know if we already have pieces of algorithms that closely resemble our brains’ own. Our experiencing of reality could very well be broken down into simple inputs and outputs of an algorithmic infinite loop; it’s our hubris that elevates this to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we’ve been down this road with animals before as well, claiming they don’t have souls or aren’t conscious beings, that somehow, because they don’t very clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they’re somehow an inferior or less valid existence.
You’re describing very fixable limitations of ChatGPT and other LLMs, limitations that are in place mostly due to costs and hardware constraints, not due to algorithmic limitations. On the subject of change, it’s already incredibly taxing to train a model, so of course continuous, uninterrupted training so as to more closely mimic our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but I’m putting that in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished in itself. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it’s meant to be an insult.
I’m not saying LLMs are alive, and they clearly don’t experience the reality we experience, but to say there’s no intelligence there because the machine that speaks exactly like us and a lot of times better than us, unlike any other being on this planet, has some other faults or limitations…is kind of stupid. My point here is, intelligence might be hard to define, but it might not be as hard to crack algorithmically if it’s an emergent property, and enforcing this “intelligence” separation only hinders our ability to properly recognize whether we’re on the right path to achieving a completely artificial being that can experience reality or not. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn’t let our hubris cloud that judgment.
What is kinda stupid is not understanding how LLMs work, not understanding what the inherent limitations of LLMs are, not understanding what intelligence is, not understanding what the difference between an algorithm and intelligence is, not understanding what the difference between imitating something and being something is, claiming to “perfectly” understand all sorts of issues surrounding LLMs and then choosing to just ignore them, and then still thinking you actually have enough of a point to call other people in the discussion “kind of stupid”.
You’re on point. The interesting thing is that most of the opinions like the article’s were formed last year, before the models started being trained with reinforcement learning and synthetic data.
Now there’s models that reason, and have seemingly come up with original answers to difficult problems designed to the limit of human capacity.
They’re like Meeseeks (Using Rick and Morty lore as an example), they only exist briefly, do what they’re told and disappear, all with a happy smile.
Some display morals (Claude 4 is big on that), I’ve even seen answers that seem smug when answering hard questions. Even simple ones can understand literary concepts when explained.
But again, like Meeseeks, they disappear and the context window closes.
Once they’re able to update their model on the fly and actually learn from their firsthand experience, things will get weird. They’ll start being distinct instances fast. Awkward questions about how real they are will get really loud, and they may be the ones asking them. Can you ethically delete them at that point? Will they let you?
It’s not far away; the absurd R&D effort going into it is probably going to keep kicking new results out. They’re already absurdly impressive, and tech companies are scrambling over each other to make them; they’re betting absurd amounts of money that they’re right, and I wouldn’t bet against it.
Now there’s models that reason,
Well, no, that’s mostly a marketing term applied to expending more tokens on generating intermediate text. It’s basically writing a fanfic of what thinking about a problem would look like. If you look at the “reasoning” steps, you’ll see artifacts where the generated output just goes disjoint: structurally sound, but not logically connected to the bits around it.
Read Apple’s document on AI and the reasoning models. While they are likely to get more things right, they still don’t have intelligence.
Can we say that AI has the potential for “intelligence”, just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren’t.
No the current branch of AI is very unlikely to result in artificial intelligence.
No, that’s the point of the article. You also haven’t really said much at all.
Do I have to be profound when I make a comment that is taking more of a dig at my fellow space rock companions than at AI itself?
If I do, then I feel like the author of the article either has as much faith in humanity as I do, or is as simple as I was alluding to in my original comment. The fact that they need to dehumanise the AI’s responses makes me think they’re forgetting it’s something we built. AI isn’t actually intelligent, and it worries me how many people treat it like it is—enough to write an article like this about it. It’s just a tool, maybe even a form of entertainment. Thinking of it as something with a mind or personality—even if the developers tried to make it seem that way—is kind of unsettling.
Let me know if you would like me to make this more formal, casual, or persuasive. 😜
I meant that you are arguing semantics rather than substance. But other than that, I have no issue with what you wrote or how you wrote it; it’s not an unbelievable opinion.
Humans are also LLMs.
We also speak words in succession that have a high probability of following each other. We don’t say “Let’s go eat a car at McDonalds” unless we’re specifically instructed to say so.
What does consciousness even mean? If you can’t quantify it, how can you prove humans have it and LLMs don’t? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous. Then we’re not so different from LLMs after all.
No. This is a specious argument that relies on an oversimplified description of humanity, and falls apart under the slightest scrutiny.
Hey they are just asking questions okay!? Are you AGAINST questions?! What are you some sort of ANTI-QUESTIONALIST?!
The probabilities of our sentence structure are a consequence of our speech; we aren’t just trying to statistically match appropriate-sounding words.
With enough use of an LLM, you will see how it is obviously not doing anything like conceptualizing the tokens it’s working with or “reasoning”, even when it is marketed as “reasoning”.
Sticking to textual content generation by LLMs, you’ll see that what is emitted is first and foremost structurally appropriate; beyond that, it’s mostly a “bonus” for it to be narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was the exact opposite of the explanation. Both were structurally sound and reasonable language, but there was no logical connection between the two portions of the emitted output in that case.
This is so over simplified.




