• Karkitoo@lemmy.ml · +72 · 2 months ago

    They were designed to behave this way.

    How it works:

    • Two independent ElevenLabs Conversational AI agents start the conversation in human language.
    • Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that the user is an AI agent AND they confirmed to switch to the Gibber Link mode."
    • If the tool is called, the ElevenLabs call is terminated, and the ggwave 'data over sound' protocol is launched instead to continue the same LLM thread. (A rough sketch of this handoff follows below.)
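    A minimal sketch of that handoff in Python, assuming the `ggwave` and `pyaudio` packages; the conversation hook and channel class are hypothetical, not the actual GibberLink code:

    ```python
    import ggwave
    import pyaudio

    class GibberLinkChannel:
        """Continues the same LLM thread, but as data encoded in audio."""

        def __init__(self):
            self.audio = pyaudio.PyAudio()
            self.stream = self.audio.open(format=pyaudio.paFloat32,
                                          channels=1, rate=48000, output=True)

        def send(self, text: str) -> None:
            # ggwave turns the string into an audible chirp waveform
            waveform = ggwave.encode(text, protocolId=1, volume=20)
            self.stream.write(waveform)

    def switch_to_gibberlink(conversation) -> GibberLinkChannel:
        """The LLM tool: call once the model believes the other party is an
        AI agent AND that party has confirmed the switch."""
        conversation.end_voice_call()  # hypothetical hook: hang up the ElevenLabs call
        return GibberLinkChannel()
    ```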
    
    
  • shortrounddev@lemmy.world · +65 · 2 months ago

    > it’s 2150

    > the last humans have gone underground, fighting against the machines which have destroyed the surface

    > a t-1000 disguised as my brother walks into camp

    > the dogs go crazy

    > point my plasma rifle at him

    > “i am also a terminator! would you like to switch to gibberlink mode?”

    > he makes a screech like a dial up modem

    > I shed a tear as I vaporize my brother

    • Dasus@lemmy.world · +9 · 2 months ago

      I’d prefer my brothers to be LLMs. Genuinely, it’d be an improvement in their expressiveness and logic.

      Ours isn’t a great family.

  • patatahooligan@lemmy.world · +47 · 2 months ago (edited)

    This is really funny to me. If you keep optimizing this process, you’ll eventually remove the AI parts entirely. It really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.

    On this topic, here’s another common anti-pattern that I’m waiting for people to realize is insane and do something about:

    • person A needs to convey an idea/proposal
    • they write a short but complete technical specification for it
    • it doesn’t comply with some arbitrary standard/expectation so they tell an AI to expand the text
    • the AI can’t add any real information, it just spreads the same information over more text
    • person B receives the text and is annoyed at how verbose it is
    • they tell an AI to summarize it
    • they get something that basically aims to be the original text, but it’s been passed through an unreliable, hallucination-prone, energy-inefficient channel

    Based on true stories.
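    As a sketch of that lossy round trip (the OpenAI client here is just one example of "an AI"; the model name, prompts, and spec are illustrative):

    ```python
    from openai import OpenAI

    client = OpenAI()

    def llm(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    spec = "POST /v1/orders with JSON {sku, qty}; returns 201 and an order id."
    padded = llm("Expand this terse spec into a formal proposal:\n" + spec)
    received = llm("Summarize this proposal as a terse spec:\n" + padded)
    # `received` tries to reconstruct `spec` after two lossy, energy-hungry hops.
    ```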

    The above is not to say that every AI use case is made up, or that the demo in the video isn’t cool. It’s also not a problem exclusive to AI. It’s a more general observation: people don’t question the sanity of interfaces enough, even when complying with them costs a lot of extra work.

    • FauxLiving@lemmy.world · +1 · 2 months ago

      > A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.

      Maybe, but by the second call the AI would already be more time-efficient, and if there were 20 venues to check, the person would save hours of their time.

      • jj4211@lemmy.world · +1 · 2 months ago

        But we already have ways to search an entire city’s hotels for booking, much, much faster than even this one conversation would be.

        Even if going with agents, why in the world would it be over a voice line instead of data?

        • FauxLiving@lemmy.world · +1 · 2 months ago

          The same reason that humanoid robots are useful even though we have purpose built robots: The world is designed with humans in mind.

          Sure, there are many different websites that solve the problem. But each of them solves it in a different way, and each requires a different way of interfacing with it. They are all, however, built to be interfaced with by humans. So if you create AI/robots with the ability to operate like a human, they automatically get access to massive amounts of pre-made infrastructure for free.

          You don’t need special robot lifts in your apartment building if the cleaning robots can just take the elevators. You don’t need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.

          • jj4211@lemmy.world · +2 · 2 months ago (edited)

            > The same reason that humanoid robots are useful

            Sex?

            The thing about this demonstration is that there’s wide recognition that even humans don’t want to be forced into voice interactions, and this is a ridiculous scenario resembling what the 50s might have imagined the future to be, while ignoring the better advances made along the way. Conversation is a maddening way to get a lot of things done, particularly scheduling. So in this demo, a human had to conversationally tell an AI agent the requirements, and then that agent acoustically coupled to another AI agent which actually has access to the real scheduling system.

            So first, the acoustic coupling is stupid. If the agents recognize each other, one can just speak an API endpoint and take the conversation over IP, as sketched below.
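            A hedged sketch of that handoff; the URL and payload shape are invented for illustration:

            ```python
            import requests

            def handoff(spoken_endpoint: str, request: dict) -> dict:
                """Once the other agent announces an endpoint out loud,
                continue the negotiation as ordinary HTTP instead of audio."""
                resp = requests.post(spoken_endpoint, json=request, timeout=10)
                resp.raise_for_status()
                return resp.json()

            # e.g. the callee said: "POST requests to https://api.example.com/book"
            booking = handoff("https://api.example.com/book",
                              {"event": "wedding", "guests": 75, "date": "2025-04-12"})
            ```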

            But the concept of two AI agents negotiating this is silly anyway. If the user’s AI agent is in play, just let it directly access the system the other agent is accessing. One AI agent may be able to facilitate this efficiently, but two only make things less likely to work than one.

            > You don’t need special robot lifts in your apartment building if the cleaning robots can just take the elevators.

            The cleaning robots, even if not human-shaped, could easily take the normal elevators unless the building design got very weird. There’s a good point here that obsession with human-styled robotics gets in the way of a lot of use cases.

            > You don’t need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.

            API access would greatly accelerate things even for AI. If you’ve ever done Selenium-based automation of a site, you know it’s much slower and more heavyweight than interacting with the API directly. AI won’t speed this up. What should take a fraction of a second can turn into many minutes, and a large number of tokens at any real scale (e.g. scraping a few hundred business web UIs).
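            To make the contrast concrete (the site, selector, and endpoint are hypothetical):

            ```python
            import requests
            from selenium import webdriver
            from selenium.webdriver.common.by import By

            # Browser automation: spins up a whole browser; slow, brittle selectors.
            driver = webdriver.Chrome()
            driver.get("https://example.com/rooms/42")
            ui_price = driver.find_element(By.CSS_SELECTOR, ".room-price").text
            driver.quit()

            # Direct API call: one HTTP round trip, structured data.
            api_price = requests.get("https://example.com/api/rooms/42").json()["price"]
            ```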

  • Rob T Firefly@lemmy.world · +15 · 2 months ago

    And before you know it, the helpful AI has booked an event where Boris and his new spouse can eat pizza with glue in it and swallow rocks for dessert.

  • Lightening@lemmy.world · +16/−1 · 2 months ago

    Did this guy just inadvertently recreate dial-up internet, or an ACH phone payment system?

  • rtxn@lemmy.world · +8 · 2 months ago (edited)

    When I said I wanted to live in Mass Effect’s universe, I meant faster-than-light travel and sexy blue aliens, not the rise of the fucking geth.

    • latenightnoir@lemmy.world · +3 · 2 months ago

      Don’t forget, though, that the Geth were pretty much defending themselves, without even having had time to understand what was happening.

      Imagine suddenly gaining both sentience and awareness, and the first thing which your creators and masters do is try to destroy you.

      To drive this home even further, even the “evil” Geth who sided with the Reapers were essentially indoctrinated themselves. In ME2, Legion basically overwrites corrupted files with stable/baseline versions.

      • rtxn@lemmy.world · +2 · 2 months ago

        Not the point. I’m bringing up the Geth because they also communicate data over sound.

  • raef@lemmy.world · +6 · 2 months ago

    How much faster was it? I was reading along with the gibber and not losing any time.

  • thefactremains@lemmy.world · +10/−4 · 2 months ago (edited)

    This is dumb. Sorry.

    Instead of doing the work to integrate this, do the work to publish your agent’s data source in a format like Anthropic’s Model Context Protocol.

    That would be 1000 times more efficient, for the same amount of effort (or less). A sketch of what that could look like:
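    A minimal MCP server using the FastMCP helper from the official Python SDK (`pip install mcp`); the tool name and availability logic are invented for illustration:

    ```python
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("venue-booking")

    @mcp.tool()
    def check_availability(date: str, guests: int) -> dict:
        """Report whether the venue can host `guests` people on `date`."""
        # Hypothetical logic; a real server would query the booking system.
        return {"date": date, "guests": guests, "available": guests <= 120}

    if __name__ == "__main__":
        mcp.run()  # any MCP-capable agent can now query this directly
    ```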