We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which word (or fragment of a word) will come next in a sequence, based on the data it’s been trained on.
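
To make that concrete, here is a toy sketch of the idea (a tiny bigram table, nothing like how production LLMs are actually built): it “writes” by sampling whichever words followed the current word in its training data.

```python
# Toy bigram "language model": a minimal sketch of next-word guessing.
# Illustrates the statistical idea only; real LLMs predict tokens with
# neural networks, not lookup tables.
import random

corpus = "the cat sat on the mat and the cat slept".split()

# Count which words followed which in the "training data".
followers = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers.setdefault(prev, []).append(nxt)

def next_word(word):
    # "Guess" the next word by sampling from observed continuations.
    return random.choice(followers.get(word, ["<end>"]))

word = "the"
for _ in range(5):
    print(word, end=" ")
    word = next_word(word)
print()
```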

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

https://archive.ph/Fapar

  • benni@lemmy.world · 5 days ago · +30/-1

    I think we should start by not following this marketing speak. The sentence “AI isn’t intelligent” makes no sense. What we mean is “LLMs aren’t intelligent”.

    • innermachine@lemmy.world · 5 days ago · +13/-1

      So couldn’t we say LLMs aren’t really AI? Cuz that’s what I’ve come to terms with.

      • TheGrandNagus@lemmy.world · 5 days ago · +19

        To be fair, the term “AI” has always been used in an extremely vague way.

        NPCs in video games, chess computers, or other such tech are not sentient and do not have general intelligence, yet we’ve been referring to those as “AI” for decades without anybody taking issue with it.

        • benni@lemmy.world · 4 days ago · +3

          It’s true that the word has always been used loosely, but there was no issue with it because nobody believed what was called AI to have actual intelligence. Now this is no longer the case, and so it becomes important to be more clear with our words.

        • MajorasMaskForever@lemmy.world · 5 days ago · +5

          I don’t think the term AI itself has been used in a vague way; it’s that there’s a huge disconnect between how the technical fields use it and how the general populace understands it, and marketing groups heavily abuse that disconnect.

          Artificial has two meanings/use cases. One is to indicate something is fake (video game NPC, chess bots, vegan cheese). The end product looks close enough to the real thing that for its intended use case it works well enough. Looks like a duck, quacks like a duck, treat it like a duck even though we all know it’s a bunny with a costume on. LLMs on a technical level fit this definition.

          The other definition is man-made. Artificial diamonds are a great example of this: they’re still diamonds at the end of the day, with the same chemical makeup and the same chemical and physical properties. The only difference is they came from a laboratory staffed by adult workers instead of child slave labor.

          My pet theory is that science fiction got the general populace to think of artificial intelligence using the “man-made” definition instead of the “fake” definition that these companies are using. In the past the subtle nuance never caused a problem, so we all just kinda ignored it.

          • El Barto@lemmy.world · 5 days ago · +5

            Dafuq? Artificial always means man-made.

            Nature also makes fake stuff. For example, fish that have an appendix that looks like a worm, to attract prey. It’s a fake worm. Is it “artificial”? Nope. Not man made.

              • atrielienz@lemmy.world · 4 days ago · +3

                Word roots say they have a point though. Artifice, Artificial etc. I think the main problem with the way both of the people above you are using this terminology is that they’re focusing on the wrong word and how that word is being conflated with something it’s not.

                LLMs are artificial. They are a man-made thing that is intended to fool us into believing they are something they aren’t. What we’re meant to be convinced they are is sapiently intelligent.

                Mimicry is not sapience, and that’s where the argument for LLMs being real honest-to-God AI falls apart.

                Sapience is missing from generative LLMs. They don’t actually think. They don’t actually have motivation. When we anthropomorphize them, we are fooling ourselves into thinking they are a man-made reproduction of us without the meat-flavored skin suit. That’s not what’s happening. But some of us are convinced that it is, or that it’s near enough that it doesn’t matter.

      • herrvogel@lemmy.world · 5 days ago · +2

        LLMs are one of the approximately one metric crap ton of different technologies that fall under the rather broad umbrella of the field of study that is called AI. The definition for what is and isn’t AI can be pretty vague, but I would argue that LLMs are definitely AI because they exist with the express purpose of imitating human behavior.

        • El Barto@lemmy.world · 5 days ago · +1/-1

          Huh? Since when is an AI’s purpose to “imitate human behavior”? AI is about solving problems.

          • herrvogel@lemmy.world · 5 days ago · +5

            It is and it isn’t. Again, the whole thing is super vague. Machine vision or pattern-seeking algorithms do not try to imitate any human behavior, but they fall under AI.

            Let me put it this way: Things that try to imitate human behavior or intelligence are AI, but not all AI is about trying to imitate human behavior or intelligence.

            • Buddahriffic@lemmy.world · 5 days ago · +1

              From a programming pov, a definition of AI could be an algorithm or construct that can solve problems or perform tasks without the programmer specifically solving that problem or programming the steps of the task but rather building something that can figure it out on its own.

              Though a lot of game AIs don’t fit that description.
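
              A minimal sketch of that definition (invented toy example): the programmer never writes the classification rule itself; the program derives it from labeled examples.

              ```python
              # The spam rule below is never written by the programmer; it is
              # derived from labeled examples. Invented toy data, illustrative only.
              examples = [("buy now!!!", True), ("meeting at 3", False),
                          ("free $$$ click!!!!", True), ("lunch tomorrow?", False)]

              def learn_threshold(data):
                  spam = [text.count("!") for text, is_spam in data if is_spam]
                  ham = [text.count("!") for text, is_spam in data if not is_spam]
                  return (min(spam) + max(ham)) / 2  # split point found from the data

              threshold = learn_threshold(examples)
              print("spam?", "wow!!! free!!!".count("!") > threshold)  # learned, not hand-coded
              ```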

            • El Barto@lemmy.world · 5 days ago · +1/-1

              I can agree with “things that try to imitate human intelligence” but not “human behavior”. An Elmo doll laughs when you tickle it. That doesn’t mean it exhibits artificial intelligence.

      • Melvin_Ferd@lemmy.world · 5 days ago · +2/-3

        We can say whatever the fuck we want. This isn’t any kind of real issue. Think about it: if you went the rest of your life calling LLMs turkey butt fuck sandwiches, what changes? This article is just shit, and people looking to be outraged over something that other articles told them to be outraged about. This is all pure fucking modern yellow journalism. I hope turkey butt sandwiches replace every journalist. I’m so done with their crap.

    • undeffeined@lemmy.ml · 5 days ago · +11

      I always make a point of referring to it as an LLM, exactly to emphasize that it’s not an intelligence.

  • Knock_Knock_Lemmy_In@lemmy.world · 5 days ago · +16/-2

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure.

    This is not a good argument.

    • bitjunkie@lemmy.world · 5 days ago · +1/-3

      philosopher

      Here’s why. It’s a quote from a pure academic attempting to describe something practical.

      • Knock_Knock_Lemmy_In@lemmy.world · 5 days ago · +4/-1

        The philosopher has made an unproven assumption. An erroneous logical leap. Something an academic shouldn’t do.

        Just because everything we currently consider conscious has a physical presence, does not imply that consciousness requires a physical body.

  • Bogasse@lemmy.ml · 5 days ago · +13

    The idea that RAG “extends their memory” is also complete bullshit. We literally just finally built a working search engine, but instead of giving it a nice interface we only let chatbots use it.
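
    For anyone unfamiliar, a minimal sketch of the RAG pattern being described, with placeholder function names rather than any real library API: retrieval is just search, and the “extended memory” is the search results pasted into the prompt.

    ```python
    # Minimal sketch of the RAG pattern: search, then paste results into
    # the prompt. search() and llm() are placeholders, not a real API.
    def search(query, documents):
        # Naive keyword matching standing in for a real search engine.
        words = query.lower().split()
        return [d for d in documents if any(w in d.lower() for w in words)]

    def llm(prompt):
        return f"<model output conditioned on: {prompt!r}>"  # placeholder

    def rag_answer(query, documents):
        hits = search(query, documents)                   # the search engine part
        prompt = "Context:\n" + "\n".join(hits) + "\nQuestion: " + query
        return llm(prompt)                                # the chatbot part

    docs = ["The cat sat on the red mat.", "Renewals are due in June."]
    print(rag_answer("what did the cat sit on?", docs))
    ```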

  • El Barto@lemmy.world · 5 days ago · +7/-1

    I agreed with most of what you said, except the part where you say that real AI is impossible because it’s bodiless or “does not experience hunger” and other stuff. That part does not compute.

    A general AI does not need to be conscious.

    • NιƙƙιDιɱҽʂ@lemmy.world · 5 days ago · +2/-2

      That, and there is literally no way to prove something is or isn’t conscious. I can’t even prove to another human being that I’m a conscious entity; you just have to assume I am because, from your own experience, you are, so therefore I must be too, right?

      Not saying I consider AI in its current form to be conscious; more so that the whole idea is just silly and unfalsifiable.

  • scarabic@lemmy.world · 5 days ago · +14/-9

    My thing is that I don’t think most humans are much more than this. We too regurgitate what we have absorbed in the past. Our brains are not hard logic engines but “best guess” boxes, and they base those guesses on past experience and probability of success. We make choices before we are aware of them and then apply rationalizations after the fact to back them up. Is that true “reasoning”?

    It’s similar to the debate about self-driving cars. Are they perfectly safe? No, but have you seen human drivers???

    • fishos@lemmy.world · 4 days ago · +5/-3

      I’ve been thinking this for a while. When people say “AI isn’t really that smart, it’s just doing pattern recognition”, all I can think is “don’t you realize that is one of the most commonly brought up traits concerning the human mind?” Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the “face pattern”. Humans are at least 90% regurgitating previous data. It’s literally why you’re supposed to read and interact with babies so much. It’s how you learn “red glowy thing is hot”. It’s why education and access to knowledge is so important. It’s every annoying person who has endless “did you know?” facts. Science is literally “look at previous data, iterate a little bit, look at new data”.

      None of what AI is doing is truly novel or different. But we’ve placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to… our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as “intelligence”. We’re a bunch of instincts in a trenchcoat. To think AI isn’t or can’t reach our level is just hubris. A trait that probably is more unique to humans.

      • scarabic@lemmy.world · 4 days ago · +2

        Yep we are on the same page. At our best, we can reach higher than regurgitating patterns. I’m talking about things like the scientific method and everything we’ve learned by it. But still, that’s a 5% minority, at best, of what’s going on between human ears.

    • Puddinghelmet@lemmy.world · 4 days ago · +4/-3

      Human brains are much more complex than a mirroring script xD The number of neurons in your brain is something AI and supercomputers only have a fraction of. But you’re right, for you it’s probably not much different than AI.

      • scarabic@lemmy.world · 4 days ago · +1

        I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.

      • TangledHyphae@lemmy.world · 4 days ago · +3/-2

        The human brain contains roughly 86 billion neurons, while ChatGPT, a large language model, has 175 billion parameters (often referred to as “artificial neurons” in the context of neural networks). While ChatGPT has more “neurons” in this sense, it’s important to note that these are not the same as biological neurons, and the comparison is not straightforward.

        86 billion neurons in the human brain isn’t that much compared to some of the larger models rumored to have 1.7 trillion parameters, though.
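
        The back-of-envelope arithmetic behind that comparison (rough, commonly cited figures; the 1.7 trillion number is a rumored parameter count, and parameters arguably map closer to synapses than to neurons):

        ```python
        # Back-of-envelope comparison (rough, commonly cited figures only).
        neurons_human  = 86e9    # ~86 billion neurons in a human brain
        synapses_human = 1e14    # ~100 trillion synapses (rough estimate)
        params_gpt3    = 175e9   # GPT-3's published parameter count
        params_rumored = 1.7e12  # rumored parameter count of larger models

        print(f"parameters vs neurons:  {params_gpt3 / neurons_human:.1f}x")      # ~2x
        print(f"synapses vs parameters: {synapses_human / params_rumored:.0f}x")  # ~59x
        ```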

  • RalphWolf@lemmy.world · 6 days ago · +14/-1

    Steve Gibson on his podcast, Security Now!, recently suggested that we should call it “Simulated Intelligence”. I tend to agree.

    • pyre@lemmy.world · 6 days ago · +3

      reminds me of Mass Effect’s VI, “virtual intelligence”: a system that’s specifically designed not to be truly intelligent, as AI systems are banned throughout the galaxy for their potential to go rogue.

      • Repple (she/her)@lemmy.world · 5 days ago · +5

        Same. I tend to think of LLMs as a very primitive version of that, or of the Enterprise’s computer, which is pretty magical in ability but which no one claims is actually intelligent.

  • Geodad@lemmy.world · 6 days ago · +26/-13

    I’ve never been fooled by their claims of it being intelligent.

    It’s basically an overly complicated series of if/then statements that tries to guess the next series of inputs.

    • kromem@lemmy.world · 5 days ago · +13/-7

      It very much isn’t and that’s extremely technically wrong on many, many levels.

      Yet still one of the higher up voted comments here.

      Which says a lot.

      • El Barto@lemmy.world · 5 days ago · +4

        I’ll be pedantic, but yeah. It’s all transistors all the way down, and transistors are pretty much chained if/then switches.

      • Blue_Morpho@lemmy.world · 5 days ago · +4

        Given that the weights in a model are transformed into a set of conditional if statements (GPU or CPU JMP machine code), he’s not technically wrong. Of course, it’s more than just JMP; JMP here stands in for the entire class of jump instructions, like JE and JZ. Something needs to act on the results of the TMULs.
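
        To illustrate where that conditional behavior can be said to live (a toy sketch, not real GPU code or any particular model): the weights are multiply-accumulate operands, and the nonlinearity acts as the “if” on their results.

        ```python
        # Minimal neural layer: multiply-accumulates, then a conditional (ReLU).
        def relu(x):
            # Conceptually an "if" acting on the result of the multiplies;
            # compilers may emit a branch or a branchless max for this.
            return x if x > 0 else 0.0

        def layer(inputs, weights, biases):
            out = []
            for w_row, b in zip(weights, biases):
                acc = sum(w * x for w, x in zip(w_row, inputs))  # the TMUL-ish part
                out.append(relu(acc + b))                        # the conditional part
            return out

        print(layer([1.0, 2.0], [[0.5, 0.25], [-1.0, 2.0]], [0.0, 0.1]))  # [1.0, 3.1]
        ```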

  • mechoman444@lemmy.world · 5 days ago · +8/-2

    In that case let’s stop calling it AI, because it isn’t one, and use the correct abbreviation: LLM.

  • psycho_driver@lemmy.world · 5 days ago · +7/-2

    Hey, AI helped me stick it to the insurance man the other day. I was futzing around with coverage amounts on one of the major insurance companies’ websites pre-renewal to try to get the best rate and it spit up a NaN renewal amount for our most expensive vehicle. It let me go through with the renewal for less than $700, and now it says I’m paid in full for the six-month period. It’s been days now with no follow-up… I’m pretty sure AI snuck that one through for me.
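
    (For what it’s worth, that’s plausibly classic NaN propagation rather than any AI sneaking something through; a hypothetical sketch, obviously not the insurer’s actual code:)

    ```python
    # NaN propagates through arithmetic and fails every comparison, so a
    # naive premium calculation plus naive range checks lets it sail through.
    rate = float("nan")      # e.g. a missing coverage amount upstream
    premium = rate * 6       # still NaN: NaN * anything == NaN
    print(premium)           # nan
    print(premium > 100000)  # False -- a "too expensive" check never fires
    print(premium < 0)       # False -- a "negative amount" check never fires
    ```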

  • aceshigh@lemmy.world · 5 days ago · +14/-10

    I’m neurodivergent; I’ve been working with AI to help me learn about myself and how I think. It’s been exceptionally helpful. A human wouldn’t have been able to help me, because I don’t use my senses or emotions like everyone else, and I didn’t know it… AI excels at mirroring and support, which was exactly what was missing from my life. I can see how this could go very wrong with certain personalities…

    E: I use it to give me ideas that I then test out solo.

    • Snapz@lemmy.world · 5 days ago · +21

      This is very interesting… because the general saying is that AI is convincing to non-experts in the field it’s speaking about. So in your specific case, you are actually saying that you aren’t an expert on yourself, and therefore the AI’s assessment is convincing to you. Not trying to upset you; it’s genuinely fascinating how that theory holds true here as well.

      • aceshigh@lemmy.world · 5 days ago · +3

        I use it to give me ideas that I then test out. It’s fantastic at nudging me in the right direction, because all that it’s doing is mirroring me.

        • innermachine@lemmy.world · 5 days ago · +1/-3

          If it’s just mirroring you, one could argue you don’t really need it? Not trying to be a prick; if it is a good tool for you, use it! It sounds to me as though you’re using it as a sounding board, and that’s just about the perfect use for an LLM if I could think of any.

    • PushButton@lemmy.world · 5 days ago · +22/-3

      That sounds fucking dangerous… You really should consult a HUMAN expert about your problem, not an algorithm made to please the interlocutor…

  • Lovable Sidekick@lemmy.world · 6 days ago · +6/-3

    Amen! When I say the same things this author is saying I get, “It’S NoT StAtIsTiCs! LeArN aBoUt AI bEfOrE yOu CoMmEnT, dUmBaSs!”

  • Kiwi_fella@lemmy.world · 4 days ago · +1/-10

    Can we say that AI has the potential for “intelligence”, just like some people do? There are clearly some very intelligent people in the world, and very clearly some that aren’t.

  • doodledup@lemmy.world · 4 days ago · +1/-18

    Humans are also LLMs.

    We also speak words in succession that have a high probability of following each other. We don’t say “Let’s go eat a car at McDonalds” unless we’re specifically instructed to say so.

    What does consciousness even mean? If you can’t quantify it, how can you prove humans have it and LLMs don’t? Maybe consciousness is just one thought following the next, one word after the other, one neural connection determined by the previous one. Then we’re not so different from LLMs after all.
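
    To put a toy number on that intuition (counts invented purely for illustration), a bigram-style view of why “car” essentially never follows “let’s go eat a”:

    ```python
    # Invented counts, purely illustrative: a corpus-derived distribution
    # over continuations of "let's go eat a ..." puts ~zero mass on "car".
    from collections import Counter

    continuations = Counter({"burger": 120, "pizza": 95, "salad": 40, "car": 0})
    total = sum(continuations.values())
    for word, n in continuations.most_common():
        print(f"P({word}) = {n / total:.3f}")
    ```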

    • jj4211@lemmy.world · 4 days ago · +5

      The probabilities of our sentence structure are a consequence of our speech; we aren’t just trying to statistically match appropriate-sounding words.

      With enough use of LLMs, you will see that they are obviously not doing anything like conceptualizing the tokens they’re working with or “reasoning”, even when marketed as “reasoning” models.

      Sticking to textual content generation by LLMs, you’ll see that what is emitted is first and foremost structurally appropriate, but beyond that it’s mostly a “bonus” if it is narratively consistent, and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly the opposite of the explanation. Both portions were structurally sound, reasonable language, but there was no logical connection between the two parts of the emitted output.
      Sticking to textual content generation by LLM, you’ll see that what is emitted is first and foremost structurally appropriate, but beyond that it’s mostly “bonus” for it to be narratively consistent and an extra bonus if it also manages to be factually consistent. An example I saw from Gemini recently had it emit what sounded like an explanation of which action to pick, and then the sentence describing actually picking the action was exactly opposite of the explanation. Both of those were structurally sound and reasonable language, but there’s no logical connection between the two portions of the emitted output in that case.