- cross-posted to:
- technology@lemmy.ml
- science@lemmy.world
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn't changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence, based on the data it's been trained on.
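To make the "digital parrot" point concrete, here's a deliberately tiny sketch (plain Python, made-up corpus; real LLMs use neural networks over subword tokens rather than bigram counts, but the generation loop is the same idea): the model "writes" by repeatedly sampling whichever word tended to follow the previous one in its training data.

```python
import random
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram model over a made-up training corpus.
corpus = "the cat sat on the mat and the cat ate the rat".split()

# Count how often each word follows each other word in the training data.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Guess the next word, weighted by how often it followed `prev`."""
    counts = following[prev]
    if not counts:  # dead end: this word never appeared mid-corpus
        return random.choice(corpus)
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generation is just repeated guessing; there is no understanding anywhere.
words = ["the"]
for _ in range(10):
    words.append(next_word(words[-1]))
print(" ".join(words))
```

Run it a few times: the output looks vaguely like the corpus because it *is* the corpus, statistically reshuffled.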
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance - nothing more, and nothing less.
So why is a real "thinking" AI likely impossible? Because it's bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn't hunger, desire or fear. And because there is no cognition - not a shred - there's a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the "hard problem of consciousness". Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to "happen", there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
As someone who's had two kids since AI really vaulted onto the scene, I am enormously confused as to why people think AI isn't or, particularly, can't be sentient. I hate to be that guy who pretends to be the parenting expert online, but most of the people I know personally who take the non-sentient view on AI don't have kids. The other side usually does.
People love to tout this as some sort of smoking gun. That feels like a trap. Obviously, we can argue about the age at which children gain sentience, but my year-and-a-half-old daughter is building an LLM with pattern recognition, tests, feedback, and hallucinations. My son is almost 5, and he was and is the same. He told me the other day that a petting zoo came to the school. He was adamant it happened that day. I know for a fact it happened the week before, but he insisted. He told me later that day that his friend's dad was in jail for threatening her mom. That one was true, but it looked to me like another hallucination or, more likely, a misunderstanding.
And as funny as it would be to argue that they're both sapient, but not sentient, I don't think that's the case. I think you can make the case that without true volition, AI is sentient but not sapient. I'd love to talk to someone in the middle of the computer science and developmental psychology Venn diagram.
I'm a computer scientist who has a child, and I don't think AI is sentient at all. Even before learning a language, children have their own personality and willpower, which is something that I don't see in AI.
I left a well-paid job in the AI industry because the mental gymnastics required to maintain the illusion were too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.
The core of your claim is basically that "people who don't think AI is sentient don't really understand sentience". I think that's both reductionist and, frankly, a bit arrogant.
Couldn't agree more - there are some wonderful insights to gain from seeing your own kids grow up, but I don't think this is one of them.
Kids are certainly building a vocabulary and learning about the world, but LLMs don't learn: once trained, their weights are frozen, and every conversation starts from the same fixed model.
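A minimal sketch of that point (assuming PyTorch is available, with a toy linear layer standing in for a real LLM): running inference any number of times leaves the weights untouched; whatever "learning" happened, happened once, during training.

```python
import torch
from torch import nn

# A tiny linear layer stands in for a deployed LLM.
model = nn.Linear(4, 4)
model.eval()                  # inference mode
before = model.weight.clone()

with torch.no_grad():         # no gradients are even computed
    for _ in range(100):
        _ = model(torch.randn(1, 4))   # "answering" 100 prompts

assert torch.equal(model.weight, before)  # nothing was learned
```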
Not to get philosophical, but to answer you we need to answer what "sentient" means.
Is it just observable behavior? If so, then wouldn't Kermit the Frog be sentient?
Or does sentience require something more, maybe qualia or some other subjective experience?
If your son says "Dad, I got to go potty", is that him just using an LLM to learn that those words equal going to the bathroom? Or is he doing something more?
Not that person, but here's an interesting lecture on that topic.