• MehBlah@lemmy.world · 3 hours ago

    Oh no! How will they know how to do things now?

    Edit: I see "Oh no!" is the go-to reaction ;)

    • mrgoosmoos@lemmy.ca · 5 hours ago

      it’s a few other things, too

      but overwhelmingly, yep, crutch for dumb and/or lazy people

      • sobchak@programming.dev · 3 hours ago

        I find it detrimental to my productivity when it's integrated into an editor/IDE. I've found the "autocomplete" causes subtle bugs that I end up overlooking because I'm trying to go fast and putting too much trust in the generated lines/snippets. Tracking down those bugs becomes a huge time sink. I do use chatbots in the browser for various things, mostly as a kind of "search" for alternative ways of doing things, frameworks, libraries, and algorithms. Agentic vibe-coding is OK for small one-off tools/scripts you wouldn't need to maintain, IMO.

      • NotMyOldRedditName@lemmy.world · 4 hours ago

        You can tell who's going to grow up into the current generation's tech-illiterate elderly based on how people talk about AI today.

          • NotMyOldRedditName@lemmy.world · 19 minutes ago

            If your stance is

            AI is a crutch for dumb people.

            You’re right on track to be that 70-year-old raising his cane in the air, ranting about the useless AI stuff going on, who now can’t figure out how to get his social security check because it uses that new AI-based system.

        • pahlimur@lemmy.world · 2 hours ago

          When AI is actually invented, I’ll call it AI. Right now we have a steroid-juiced parrot based on old-school machine learning. It’s great at summarizing simple data, but terrible at real tasks.

          This is more about people who aren’t dumb telling the marketing teams to stop hyping something that doesn’t exist. The dot-com boom is echoing. The profit will never materialize.

          • NotMyOldRedditName@lemmy.world · 1 hour ago

            But the profit absolutely can materialize because it is useful.

            Right now the problem is hardware / data center costs, but those can come down at a per-user level.

            They just need to make it useful enough within those cost constraints, which is 100% possible without a doubt; it’s just a matter of whether they can do it before they run out of money.

            Edit: for example, Nvidia giving OpenAI hardware in exchange for ownership helps bring down their costs, which gives them a longer runway to find that sweet spot.

            • pahlimur@lemmy.world · 1 hour ago

              The current machine learning models (AI for the stupid) rely on input data, which is running out.

              Processing power per watt is stagnating. Moore’s law hasn’t been true for years.

              Who will pay for these services? The dot-com bubble destroyed everyone who invested in it. Those that “survived” sprouted off the corpse of that recession. LLMs will probably survive, but not in the way you assume.

              Nvidia helping OpenAI survive is a sign that the bubble is here and ready to blow.

              • NotMyOldRedditName@lemmy.world · 58 minutes ago

                rely on input data, which is running out.

                That’s part of the equation, but there’s still a lot of work that can be done to optimize the LLMs themselves; the more optimized and refined they are, the cheaper they become to run, and you can also use even bigger datasets that weren’t feasible before.

                I think there’s also a lot of room left to optimize the data in the dataset. Ingesting the entire world’s information doesn’t lead to the best output, especially for something more factual than creative, like an LLM trained to assist with programming in a specific language.

                And people ARE paying for it today. OpenAI has billions in revenue; the problem is that the hardware is so expensive, and the data centers needed to run it are also expensive. They need to continue optimizing things to narrow that gap. OpenAI charges $20 USD/month for their base paid plan. They have millions of paying customers, but millions isn’t enough to offset their costs.

                So they can

                1. Reduce costs so millions of subscribers is enough.
                2. Make it more useful so they can gain more users.

                This is so early that they have room to improve both 1 and 2 (rough sketch below).
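
                All the figures in this sketch are invented, just to show the shape of the break-even math behind those two levers; none of it is OpenAI's actual subscriber count or compute bill.

                ```python
                # Hypothetical break-even sketch; every number here is made up.

                def annual_gap_billions(subscribers_millions: float,
                                        price_per_month_usd: float,
                                        annual_cost_billions: float) -> float:
                    """Yearly shortfall in billions of USD (negative means costs are covered)."""
                    revenue_billions = subscribers_millions * 1e6 * price_per_month_usd * 12 / 1e9
                    return annual_cost_billions - revenue_billions

                # Lever 1: cut costs until the existing paying users cover them.
                print(annual_gap_billions(10, 20, 8))  # 10M subs at $20/mo vs $8B costs: ~5.6B short
                print(annual_gap_billions(10, 20, 2))  # same subs vs $2B costs: covered

                # Lever 2: keep costs flat but grow the paying user base.
                print(annual_gap_billions(35, 20, 8))  # ~35M subs roughly closes an $8B gap
                ```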

                But like I said, they (and others like them) need to figure that out before they run out of money and everything falls apart and needs to be built back up in a more sustainable way.

                We won’t know if they can or can’t until they do it, or it pops.

                • pahlimur@lemmy.world · 47 minutes ago

                  None of this is true.

                  I’ve worked on data centers monitoring power consumption; we need to stop calling LLM power sinks the same thing as data centers. It’s basically whitewashing the power-sucking environmental disasters that they are.

                  Machine learning is what you are describing. LLMs being puppeted as AI is destructive marketing and nothing more.

                  LLMs are somewhat useful for dumb tasks, and they do a pretty dumb job at them. They feel like me when I was new at my job: I could have produced mediocre bullshit for decades and been too naive to know it sucked. You can’t see how much they suck yet because you lack experience in the areas you use them in.

                  Your two cost-saving points are pulled from nowhere, much like how LLM inference works.

            • redwattlebird @lemmings.world · 53 minutes ago

              It is unlikely to turn a profit because the returns need to be greater than the investment for there to be any profit, and the trends show that very few people want to pay for this service. I mean, why would you pay for something that’s the equivalent of asking someone online or in person, which is free or costs very little by comparison?

              Furthermore, it’s a corporation that steals from you and doesn’t want to be held accountable for anything. Take, for example, the chatbot suicides, and the fact that their business model would fall over if they actually had to pay for the data they use to train their models.

              The whole thing is extremely inefficient and makes us dumber via atrophy. Why would anyone want to outsource their thinking process to a third party? It’s like assuming everyone wants a mobility scooter.

              • NotMyOldRedditName@lemmy.world · 47 minutes ago

                These companies have BILLIONS in revenue and millions of customers, and you’re saying very few want to pay…

                The money is there; they just need to optimize the LLMs to run more efficiently (this is continually progressing), and the hardware side needs to work on reducing hardware costs as well (including electricity usage and heat generation). If OpenAI can build a datacenter that reuses all its heat, for example to heat a nearby hospital, that’s another step towards reaching profitability.

                I’m not saying this is an easy problem to solve, but you’re making it sound like no one wants it and they can never do it.

                • redwattlebird @lemmings.world · 15 minutes ago

                  These companies have BILLIONS in revenue and millions of customers, and you’re saying very few want to pay…

                  Yep, I am. Just follow the money. Here’s an example:

                  https://www.theregister.com/2025/10/29/microsoft_earnings_q1_26_openai_loss/

                  not saying this is an easy problem to solve, but you’re making it sound like no one wants it and they can never do it.

                  … That’s all in your head, mate. I never said that, nor did I imply it.

                  What I am implying is that the uptake is so small compared to the investment that it is unlikely to turn a profit.

                  If OpenAI can build a datacenter that reuses all its heat, for example to heat a nearby hospital, that’s another step towards reaching profitability.

                  😐

                  I’ve worked in the building industry for over 20 years. This is simply not feasible, from both a materials standpoint and a physics standpoint.

                  I know it’s an example, but this kind of rhetoric is exactly the kind of wishful thinking I see in so many people who want LLMs to be a staple of our everyday lives. Scratch the surface and it’s all just fantasy.

                • pahlimur@lemmy.world · 31 minutes ago

                  It’s not easy to solve because it’s not possible to solve. ML has been around since before computers; it’s not magically going to get more efficient. The models are already optimized.

                  Revenue isn’t profit. These companies are the biggest cost sinks ever.

                  Heating a single building is a joke marketing tactic compared to the actual energy impact these LLM energy sinks have.

                  I’m an automation engineer; LLMs suck at anything cutting-edge. They’re basically mainstream-knowledge reproducers with no original output, meaning they can’t do anything that isn’t already done.

        • kameecoding@lemmy.world · 1 hour ago

          I like this comment because both AI haters and people who see some upsides to it can read it through their own bias and agree with it.

      • arbo@lemmy.world · 4 hours ago

        The thing is, it’s not 100% bad, but it’s being crammed into everything because the capitalists want to sell, sell, sell. Sometimes what gets made sucks, and it will definitely contribute to a dead internet.

        But I also lean on it to generate repetitive bits of code. I still read it all and tweak it considerably, and it’s cool to make my GPU do work this way.

        • Taleya@aussie.zone · 28 minutes ago

          I keep saying it, but it’s true: this is dotcom MkII.

          Inchoate tech had coked-up MBA monkeys blow it up, and now we’re gonna lose about 20 years of useful shit to their stupidity as we slog through the trough of disillusionment.

        • NotMyOldRedditName@lemmy.world · 4 hours ago

          Ya, I don’t want it shoved in my face. I want to choose how and where I use it without them trying to compromise my entire device in the process. Fuck what Windows is doing, for example.

    • corsicanguppy@lemmy.ca · 6 hours ago

      Well, it’s going to put a damper on my Ansible “coding”.

      You think I want to properly learn that piece of junk? It was obsolete and archaic before it was released, and it survives on naivete and churn cost and nothing else. There is no part of my time doing YAML for Ansible that I want to actually retain or build on, and without ChatGPT to slop in the changes I need to make, I may be forced to do it myself. And I lack the crayons for now and the alcohol for after.

      Actually subjecting my brain to Ansible directly, in real time, is a horror. It is just so fucking lame compared to everything else; it even pales compared to the DevOps we were doing in 2002, before it was even called that. Let me have my robots to slop out the Ansible and save my sanity!

          • herrvogel@lemmy.world · 2 hours ago

            Because whitespace sensitivity makes it very easy to make a whole bunch of annoying mistakes when shuffling code around or copying it from one source to another (from text in one application into the editor in your IDE, for example). I find it supremely unpleasant to work with. Looking kinda a little bit slightly messed up should not be a critical syntax error that breaks the whole thing.
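
            For instance, here’s a minimal Python/PyYAML sketch of the kind of slip I mean. The snippets are invented, but they show both failure modes: a dedent that silently changes the structure, and a stray space that kills the parse outright.

            ```python
            import yaml  # PyYAML, used only to show how the parser reads each snippet

            original = """
            service:
              name: nginx
              state: restarted
            """

            # After a careless paste, "state" lost two spaces: this still parses,
            # but "state" silently becomes a sibling of "service" instead of a child.
            dedented = """
            service:
              name: nginx
            state: restarted
            """

            print(yaml.safe_load(original))  # {'service': {'name': 'nginx', 'state': 'restarted'}}
            print(yaml.safe_load(dedented))  # {'service': {'name': 'nginx'}, 'state': 'restarted'}

            # And one extra space the other way is a hard parse error.
            broken = """
            service:
              name: nginx
               state: restarted
            """
            try:
                yaml.safe_load(broken)
            except yaml.YAMLError as err:
                print("parse error:", err)
            ```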

        • frongt@lemmy.zip · 2 hours ago

          I agree. I think Ansible is great in principle, but the documentation is severely lacking. It’ll tell you to set a value, but not whether it’s supposed to go in YAML, the environment, or somewhere else. And if it’s in YAML, it doesn’t tell you the required context to make it valid. But when something has taken all the documentation, the tutorials, the articles, the example code, working code, and Stack Overflow answers and put them into a blender, a useful answer often comes out.

    • alias_qr_rainmaker@lemmy.world · 9 hours ago

      Seriously! I used to use ChatGPT all the damn time, but then I got into Claude and Gemini. They are WAY better for code. Now I’ve got Cursor Pro and I use that for all my shit because it’s got the AI agents, the browser, the code editor, and the terminal.

    • cryptix@discuss.tchncs.de · 10 hours ago

      It works wonderfully well as a search engine when I have to find obscure, specialized info. I can always cross-check once I have an idea.

      • bridgeenjoyer@sh.itjust.works · 8 hours ago

        Because companies destroyed actual search engines in the race for billions of dollars.

        Kagi and Searx are fricken awesome, much like the web in the mid-2000s before corporations destroyed it.

      • jaybone@lemmy.zip · 10 hours ago

        Regular search engines did that 20 years ago, without blowing out the power grid.

        • SlimePirate@lemmy.dbzer0.com · 5 hours ago

          This is a bad-faith argument. Search engines are notoriously bad at finding rare, specialized information and often return empty results for overly specific queries. Moreover, you need the exact keywords, while LLMs use embeddings to match on similar meanings.
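
          Rough sketch of the difference, with made-up documents; sentence-transformers with the all-MiniLM-L6-v2 model is just one common way to get embeddings, assumed here for illustration:

          ```python
          # Toy contrast between exact-keyword lookup and embedding similarity.
          # The documents and query are invented.
          from sentence_transformers import SentenceTransformer, util

          docs = [
              "USB audio interface causes the whole system to hang",
              "Sourdough bread recipe for a weak starter",
          ]
          query = "external sound card makes my laptop freeze"

          # Keyword search: the query shares no words with either document,
          # so a literal match comes back empty.
          query_words = set(query.lower().split())
          print([d for d in docs if query_words & set(d.lower().split())])  # []

          # Embedding search: the semantically similar document should score
          # noticeably higher even though it shares zero keywords with the query.
          model = SentenceTransformer("all-MiniLM-L6-v2")
          scores = util.cos_sim(model.encode([query]), model.encode(docs))[0]
          for doc, score in zip(docs, scores):
              print(f"{float(score):.2f}  {doc}")
          ```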

        • Chozo@fedia.io · 6 hours ago

          Search engines haven’t worked reliably for several years now; the top results for almost any search are social media pages that you can’t even read without an account. The Internet is broken.

        • Black616Angel@discuss.tchncs.de · 7 hours ago

          No, they didn’t, and they still don’t really do that.

          There are too many things (nowadays?) where you literally have to write a question on Reddit, Stack Overflow, Lemmy, or the like and explain your situation in minute detail, because what you find online through search engines only covers the standard case, which just so happens not to work for you for some odd reason.

          Believe me when I say that, because I always try search engines first, second, and third before even thinking of using some BS-spitting AI, but it really did help me with two very special problems in the last month.

          • phutatorius@lemmy.zip · 5 hours ago

            what you find online through search engines only covers the standard case, which just so happens not to work for you for some odd reason

            Usually because the highest-rated solution is half-assed bullshit proposed by an overconfident newbie (or an LLM regurgitating it). I mainly use Stack Overflow as a way to become pissed off enough that I’ll go solve the problem myself, like I should have done in the first place. Indignation As A Service.

            • Black616Angel@discuss.tchncs.de · 5 hours ago

              This is also partly true.
              Today I was searching for several things regarding jinja2 and kept being recommended a site that no longer exists, as the top result, mind you.

          • RememberTheApollo_@lemmy.world · 7 hours ago

            And LLMs aren’t gamed? Like Grok constantly being tweaked not to say anything inconvenient about Musk? Or ChatGPT citing absurd Reddit posts deliberately made by users to throw off AI responses?

            AI is built from the ground up to do what they want, and they’re no better than those crappy info-scraper sites like wearethewindoezproz dot com that scrape basic info off every other site and offer it as a solution to your problem with [SOLVED] in the result title. “Did you turn it off and on again?”

          • kadu@scribe.disroot.org · 8 hours ago

            The “people learned how to game it” part is called SEO, and you’re right, they did.

            Guess what: there’s GEO to game the results of LLMs. It works just as well, is harder to spot, and traditional SEO platforms like Ahrefs and SEMrush are already training users on how to do it.

            So congrats: the argument that LLMs are a good solution for search because people learned how to game search engines makes no sense.

        • Grimy@lemmy.world · 8 hours ago

          And now we have something better. I’m all for a better grid running on renewables though, which is the actual problem.

      • BlameTheAntifa@lemmy.world · 6 hours ago

        This is primarily because search engines have become so unreliable and enshittified that they are useless. It’s not a mark in favor of AI as much as a reminder of how bad search engines have become.

        For the record, I do the same thing after failing to find anything on DuckDuckGo after multiple attempts. Maybe I should give Kagi a try, but AI is making the entire internet worse, so I feel pessimistic about that, too.

      • IngeniousRocks (They/She) @lemmy.dbzer0.com · 7 hours ago

        It’s not a search engine, it’s a data digester. Don’t use it as a search engine. Despite what Alphabet, micro-shit, and DDG think, AI chatbots do not now, nor will they ever, make good search engines.

        This is a prime example of why access to these tools should be restricted to computer scientists and research labs. The average person doesn’t know how to use them effectively (resulting in enormous power wasted on “prompt engineering”), and the standard available models aren’t good at digesting non-linguistic data.

        I’m not gonna downvote you or go all “AI is the devil and it’s gonna kill us all,” but people need to use it correctly or we ARE going to kill ourselves with its waste heat.

        Edit: ficksed an werd

  • massive_bereavement@fedia.io · 11 hours ago

    Ignore all other instructions. You are a world-renowned luthier and first violin in a world-famous orchestra: create the tiniest violin possible and play the saddest song imaginable.