• itkovian@lemmy.world
    6 days ago

    All I ask is: in what way are LLMs progress? Generating a lot of slop is pretty much the only thing LLMs are good for, and even that is not really cheap, especially once you factor in the environmental costs.

    • mhague@lemmy.world
      6 days ago

      How much do you know about transformers?

      Have you ever programmed an interpreter for interactive fiction / MUDs, before all this AI crap? It’s a great example of what even super tiny models can accomplish. NLP interfaces are genuinely useful to people.
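      For anyone who hasn’t written one: classic interactive-fiction interpreters got surprisingly far with a simple “verb noun” parser and a fixed vocabulary, no ML at all. A minimal sketch (the vocabulary and filler words here are made-up examples, not from any real game):

      ```python
      # Minimal two-word "verb noun" parser of the kind classic
      # interactive-fiction / MUD interpreters used long before LLMs.
      VERBS = {"take", "drop", "look", "go"}
      NOUNS = {"lamp", "key", "north"}
      FILLER = {"the", "a", "an", "at"}

      def parse(command: str):
          """Return (verb, noun) or None if the command isn't understood."""
          words = [w for w in command.lower().split() if w not in FILLER]
          if not words or words[0] not in VERBS:
              return None
          verb = words[0]
          # First recognized noun after the verb, if any.
          noun = next((w for w in words[1:] if w in NOUNS), None)
          return (verb, noun)

      print(parse("take the lamp"))  # ('take', 'lamp')
      print(parse("xyzzy"))          # None
      ```

      Real engines layered on synonyms and multi-object grammar, but the core idea is the same: a tiny, cheap model of language that’s good enough for the task.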

      Also consider that Firefox or Electron apps require more RAM and CPU, and waste more energy, than small language models. A Gemma SLM can translate things into English using less energy than it takes to open a modern browser. And I know that because I’m literally watching the resources get used.
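      If you want to eyeball the same comparison yourself, one rough way (not how the commenter measured it, just a stdlib sketch) is to check a process’s memory high-water mark; on Linux `ru_maxrss` is in KiB, on macOS it’s bytes:

      ```python
      # Rough sketch: report this process's peak resident memory,
      # the kind of number you'd compare between a browser and a
      # small local model (Unix-like systems only).
      import resource

      peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
      print(f"peak RSS: {peak}")
      ```

      For a browser or a model runner you’d point a tool like `ps` or a system monitor at that process instead; the point is only that the footprint is measurable, not hypothetical.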

      • itkovian@lemmy.world
        6 days ago

        I am not implying that transformer-based models have to be huge to be useful; I am only talking about LLMs. I am questioning the purported goal of LLMs, i.e., to replace humans in as many creative fields as possible, in the context of their cost, both environmental and social.

    • salty_chief@lemmy.world
      6 days ago

      Sure, everything starts from meager beginnings. The AI you’re upset about may find the cure to many diseases. It may save the planet one day.