- cross-posted to:
- technology@lemmy.world
- technology@lemmy.ml
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn't changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter or word will come next in a sequence, based on the data it's been trained on.
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance: nothing more, and nothing less.
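For readers who want to see what "guessing the next word based on probabilities" looks like in practice, here is a minimal toy sketch. The vocabulary, probabilities and prompt are invented for illustration; a real LLM computes the distribution with a neural network over tens of thousands of tokens, but the generation loop is essentially this:

```python
# Toy next-token prediction loop (illustrative only; the "model" is a
# hand-written lookup table, not a trained network).
import random

# Hypothetical next-token distributions, standing in for what a model
# would have learned from training data.
toy_model = {
    ("the", "cat"): {"sat": 0.6, "slept": 0.3, "ran": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.7, "sofa": 0.3},
}

def next_token(context):
    """Sample the next token from the model's probability distribution."""
    dist = toy_model.get(tuple(context[-2:]), {"<end>": 1.0})
    tokens, probs = zip(*dist.items())
    return random.choices(tokens, weights=probs)[0]

tokens = ["the", "cat"]
while len(tokens) < 8:
    tok = next_token(tokens)
    if tok == "<end>":
        break
    tokens.append(tok)

print(" ".join(tokens))  # e.g. "the cat sat on the mat"
```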
So why is a real "thinking" AI likely impossible? Because it's bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn't hunger, desire or fear. And because there is no cognition, not a shred, there's a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.
Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the "hard problem of consciousness". Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to "happen", there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.
What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data
Prove to me that this isn't exactly how the human mind (i.e., "real intelligence") works.
The challenge in asserting how "real" the intelligence-mimicking behavior of LLMs is isn't to convince us that it is "just" the result of cold, deterministic statistical algorithms running on silicon. That much we know, because we created them that way.
The real challenge is to convince ourselves that the wetware electrochemical neural unit embedded in our skulls, which evolved through a fairly straightforward process of natural selection to improve our odds of surviving, isn't relying on statistical models whose underlying principles are, essentially, the same.
As for all these claims that human creativity is so outstanding that it "obviously" will never be recreated by deterministic statistical models that "only" interpolate knowledge gleaned from observing human output into new contexts: I just don't see it.
What human invention, art or idea was so truly, undeniably, completely new that it cannot have sprung from something that came before it? Even the bloody theory of general relativity, held up as one of the pinnacles of human intelligence, has clear connections to what came before. If you read Einstein's works, he is actually very good at explaining how he worked it out in increments from earlier models and ideas ("what happens with a meter stick in space", etc.): that is, he was very good at using the tools we have to systematically carry our understanding from one domain into another.
To me, the argument in the linked article reads a bit like "LLM AI cannot be 'intelligence' because when I introspect I don't feel like a statistical machine". This seems about as sophisticated as the "I ain't no monkey!" counter-argument against evolution.
All this is NOT to say that we know that LLM AI = human intelligence. It is a genuinely fascinating scientific question. I just don't think we have anything to gain from the "I ain't no statistical machine" line of argument.
Prove to me that this isn't exactly how the human mind (i.e., "real intelligence") works.
For one, AI as it currently exists doesn't learn. The model has to be retrained, with its weights adjusted, in order to actually update its way of thinking. For another, its memory is nothing more than prepending your prompt history to your next prompt; there is no actual mechanism of memory, just this hack.
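A rough sketch of what that "hack" looks like in practice, assuming a generic chat-style setup: `generate()` here is a hypothetical stand-in for whatever model call is used, not a real library API. The point is that the model itself keeps no state between calls; the only "memory" is the growing transcript the client sends back each time.

```python
# Client-side "memory" via prompt concatenation (illustrative sketch).
history = []  # list of (speaker, text) pairs kept by the client, not the model

def ask(question, generate):
    # Prepend the entire prior conversation to the new prompt.
    transcript = "\n".join(f"{who}: {text}" for who, text in history)
    prompt = f"{transcript}\nUser: {question}\nAssistant:"
    answer = generate(prompt)  # the model only ever sees this one string
    history.append(("User", question))
    history.append(("Assistant", answer))
    return answer

# Example usage with a dummy model:
# ask("What did I just ask you?", generate=lambda prompt: "You asked about X.")
```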
I can't prove anything to anyone on the internet, because I don't think it's the right place and, mostly, I'm just not qualified in that area. But human babies have remarkable cognitive abilities and can decipher complex languages from very little data. I see reasons to describe that prowess as more predictive and intuitive than statistical. Probably a bit of everything, though.
Besides, what is intelligence? You could very plausibly define it as inextricably bound to life itself, which disqualifies software programs to begin with. It depends on who gives you the definition, I suppose.
So why is a real "thinking" AI likely impossible? Because it's bodiless.
Why would anything need a body to be intelligent? Just because we have bodies, and whoever said that cannot imagine different forms of life/intelligence? Not that I think current LLMs have the experience and creativity to be called intelligent. I just don't think that everything that's intelligent needs an arse :)
The title is accurate, but the article doesn't really provide explanations beyond personal anecdotes. The few quotes and concepts are gestured to, rather than used to build an argument.
The comparison to greenhouse gas warnings came out of left field, since they didn't bring up any direct relationship between the two subjects.
It reads like they expect readers to agree with them.
Any argument about AI and consciousness should point out the difference between "true" AI and the LLMs we're calling AI, and how they work.
https://hackaday.com/2024/05/15/how-ai-large-language-models-work-explained-without-math/
Here's more information on AI and consciousness:
But I believe that if AIs are passing the Turing test, we need to update the test.
Uhh, that's kind of not how tests are supposed to work. If you want non-falsifiable conviction in human specialness, maybe try religion instead.