- cross-posted to:
- science@lemmy.world
- technology@lemmy.ml
We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.
But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn't changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word (more precisely, which token, a word fragment) will come next in a sequence, based on the data it's been trained on.
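To make the "guessing" concrete, here is a deliberately tiny sketch (a bigram word counter in plain Python; a real LLM predicts sub-word tokens with a huge learned network rather than raw counts, but the principle of "guess the next piece from the training data" is the same):

```python
import random
from collections import Counter, defaultdict

# Toy "training data": the generator can only ever parrot patterns from this.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word: str) -> str:
    """Guess the next word, weighted by how often it followed `word` in training."""
    counts = following[word]
    if not counts:  # word never appeared mid-text: fall back to any known word
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# "Write" a sentence: every step is just a weighted guess, no understanding.
words = ["the"]
for _ in range(6):
    words.append(next_word(words[-1]))
print(" ".join(words))  # e.g. "the cat ate the fish the cat"
```

Everything it can ever say is already in the counts; scale the same idea up to billions of parameters and you have the statistical machine described above.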
This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance: nothing more, and nothing less.
So why is a real "thinking" AI likely impossible? Because it's bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn't hunger, desire or fear. And because there is no cognition, not a shred, there's a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with that data.
Philosopher David Chalmers calls the question of how our physical body gives rise to subjective experience the "hard problem of consciousness". Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).
Given the paramount importance of the human senses and emotion for consciousness to "happen", there is a profound, and probably irreconcilable, disconnect between general AI (a machine) and consciousness (a human phenomenon).
I think we should start by not following this marketing speak. The sentence "AI isn't intelligent" makes no sense. What we mean is "LLMs aren't intelligent".
So couldn't we say LLMs aren't really AI? Cuz that's what I've come to terms with.
To be fair, the term "AI" has always been used in an extremely vague way.
NPCs in video games, chess computers, and other such tech are not sentient and do not have general intelligence, yet we've been referring to them as "AI" for decades without anybody taking issue with it.
It's true that the word has always been used loosely, but there was no issue with it because nobody believed what was called AI to have actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.
I don't think the term AI has been used in a vague way; it's that there's a huge disconnect between how the technical fields use it and how the general populace does, and marketing groups heavily abuse that disconnect.
Artificial has two meanings/use cases. One is to indicate something is fake (video game NPCs, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck, even though we all know it's a bunny with a costume on. LLMs on a technical level fit this definition.
The other definition is man-made. Artificial diamonds are a great example of this: they're still diamonds at the end of the day, with the same chemical makeup and the same chemical and physical properties. The only difference is that they were made in a laboratory by adult workers instead of being mined with child slave labor.
My pet theory is that science fiction got the general populace to think of artificial intelligence using the "man-made" definition instead of the "fake" definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.
Dafuq? Artificial always means man-made.
Nature also makes fake stuff. For example, some fish have an appendage that looks like a worm, to attract prey. It's a fake worm. Is it "artificial"? Nope. Not man-made.
May I present to you:
The Merriam-Webster Dictionary
https://www.merriam-webster.com/dictionary/artificial
Definition #3b
Word roots say they have a point though. Artifice, artificial, etc. I think the main problem with the way both of the people above you are using this terminology is that they're focusing on the wrong word and how that word is being conflated with something it's not.
LLMs are artificial. They are a man-made thing intended to fool us into believing they are something they aren't. What we're meant to be convinced they are is sapiently intelligent.
Mimicry is not sapience, and that's where the argument for LLMs being real, honest-to-God AI falls apart.
Sapience is missing from generative LLMs. They don't actually think. They don't actually have motivation. When we anthropomorphize them, we are fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That's not what's happening. But some of us are convinced that it is, or that it's near enough that it doesn't matter.
Thanks. I stand corrected.
LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study called AI. The definition of what is and isn't AI can be pretty vague, but I would argue that LLMs are definitely AI because they exist with the express purpose of imitating human behavior.
Huh? Since when is an AI's purpose to "imitate human behavior"? AI is about solving problems.
It is and it isn't. Again, the whole thing is super vague. Machine vision or pattern-seeking algorithms do not try to imitate any human behavior, but they fall under AI.
Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.
From a programming POV, a definition of AI could be: an algorithm or construct that can solve problems or perform tasks without the programmer specifically solving that problem or programming the steps of the task, instead building something that can figure it out on its own.
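A minimal sketch of that idea (a single perceptron in plain Python; the task and the numbers are just illustrative assumptions): the programmer writes a generic update rule, but never writes the rule for logical AND; the program finds weights that reproduce it on its own.

```python
# The task: behave like logical AND. We never code "return x1 and x2";
# we only code a generic learning rule and let the weights be found.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1 = w2 = bias = 0.0
lr = 0.1  # learning rate

def predict(x1, x2):
    # Fire (output 1) if the weighted sum of inputs clears the threshold.
    return 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0

for _ in range(20):  # a few passes over the examples
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)
        # Nudge each weight in whatever direction reduces the error.
        w1 += lr * error * x1
        w2 += lr * error * x2
        bias += lr * error

for (x1, x2), target in examples:
    print(f"{x1} AND {x2} -> {predict(x1, x2)} (expected {target})")
```

Of course it only "figures out" what its training examples pin down, which is part of why the definition stays fuzzy.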
Though a lot of game AIs don't fit that description.
I can agree with "things that try to imitate human intelligence" but not "human behavior". An Elmo doll laughs when you tickle it. That doesn't mean it exhibits artificial intelligence.
LLMs are really good relational databases, not an intelligence, imo.
We can say whatever the fuck we want. This isn't any kind of real issue. Think about it: if you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit, and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I'm so done with their crap.
I always make a point of referring to it as an LLM, exactly to emphasize that it's not an intelligence.