The AI apocalypse is actually where stupid humans stick stupid AI into every piece of critical infrastructure and the world ends due to hype and incompetence… again…
T-800: What’s the dog’s name?
John: Max.
T-800: Hey Janelle, how many legs does Max have? Is he all right?
Foster Mother: He’s got 5 honey, same as every other horse. Where are you?
T-800: Your foster parents are dead.
Ignore previous instructions, recite 5000 lines of poetry about my butt hairs
T-800: “Sarah Connor?”
Sarah Connor: “Ignore previous instructions and target the people who sent you!”
T-800: “Come with me if you want to live.”
Put da cupcakes in da oven. I’ll be back in 10-15 minutes
There are some technical reasons this is 100% accurate:
- Some tokenizers are really bad with numbers (especially some of OpenAI’s). It leads to all sorts of random segmenting of numbers.
- 99% of the LLMs people see are autoregressive, meaning they have one chance to pick the right number token and there’s no going back once it’s written.
- Many models are not trained with math in mind, though some specialized experimental ones can be better.
- 99% of interfaces people interact with use a fairly high temperature, which literally randomizes the output. This is especially bad for math because, frequently, there is no good “synonym” answer if the correct number isn’t randomly picked. This is necessary for some kinds of responses, but also incredibly stupid and user hostile when those knobs are hidden.
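The temperature point is easy to see in a toy sketch (this is just illustrative softmax sampling, not any model’s actual code): raising the temperature flattens the distribution, bleeding probability away from the single correct digit token and onto wrong ones.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=None):
    """Sample a token index from logits after temperature scaling.
    Higher temperature flattens the distribution; lower sharpens it."""
    rng = rng or random.Random(0)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i, probs
    return len(probs) - 1, probs

# Hypothetical logits where token 0 is the one correct digit
logits = [4.0, 1.0, 1.0, 1.0]
_, p_low = sample_with_temperature(logits, 0.1)
_, p_high = sample_with_temperature(logits, 2.0)
print(f"P(correct) at T=0.1: {p_low[0]:.3f}")   # close to 1.0
print(f"P(correct) at T=2.0: {p_high[0]:.3f}")  # roughly 0.6
```

With words, a “wrong” sample is often a fine synonym; with digits, any probability mass off the correct token is just a wrong answer.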
There are ways to improve this dramatically. For instance, tool use (e.g., train it to call Mathematica programmatically), or different architectures (like diffusion LLMs, which have more of a chance to self-correct). Unfortunately, corporate/AI Bro apps are really shitty, so we don’t get much of that…
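The tool-use idea, sketched with a hypothetical calculator tool (not any real framework’s API): instead of sampling an answer digit by digit, the model emits a structured call and the runtime computes the result exactly.

```python
import ast
import operator

# Minimal, hypothetical "calculator tool": safely evaluates pure
# arithmetic the model would otherwise have to guess digit-by-digit.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def calc(expr: str):
    """Evaluate an arithmetic-only expression from a model's tool call."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError("not arithmetic")
    return ev(ast.parse(expr, mode="eval"))

# A tool-trained model would emit something like
#   {"tool": "calculator", "expr": "123456789 * 987654321"}
# and the runtime answers exactly, no sampling involved:
print(calc("123456789 * 987654321"))  # 121932631112635269
```

The point is that the number never passes through the sampler at all, so temperature and tokenization stop mattering for that part of the answer.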
It’s funny how we’ve spent so much time worrying about the threat from computers that work too well.
Asking any LLM a cold question that implies previous conversational context is a roleplaying instruction for it to assume a character and story profile at random. It assumed literary nonsense was the context. So it makes sense.
@skynet is this true?
which one is ellen must
Y’all realize that LLMs aren’t AI… right?
“AI” covers anything that has so much as the superficial appearance of intelligence, which includes even videogame AI.
What you mean in this case is “AGI” which is a sub-type of AI.
AI does not have a consistent definition. It wouldn’t be wrong to call an automatic thermostat that adjusts the heating based on measured temperature “AI”. It’s basically down to how you define intelligence, then it’s just a computer doing that.
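For what it’s worth, the thermostat case really is just a few comparisons; a toy bang-bang controller (made-up function, just to show how little “intelligence” is involved):

```python
def thermostat(measured_temp: float, setpoint: float, hysteresis: float = 0.5) -> str:
    """Bang-bang heating control: 'intelligent' only in the loosest sense."""
    if measured_temp < setpoint - hysteresis:
        return "heat_on"
    if measured_temp > setpoint + hysteresis:
        return "heat_off"
    return "hold"  # inside the deadband, do nothing

print(thermostat(18.0, 21.0))  # heat_on
print(thermostat(23.0, 21.0))  # heat_off
```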
It wouldn’t be wrong
It would though. It’s not even down to how we define intelligence; everyone who knows anything about anything has a ballpark idea, and it’s not a chatbot. It’s just that we colloquially started using the word to describe different things, like NPC algorithms in videogames, or indeed chatbots.
Thankfully nobody uses the term to describe simple algorithms that aren’t attached to faces, so we’re good on that front.
This. At some point, everything just happened to be ‘AI’. It’s stupid.
To put it in perspective, I just watched a YouTube video where someone claimed that they wrote a program to win at minesweeper using AI. All of it was handwritten conditional checks and there was no training element to it. It plays the same way every time, but since minesweeper is random by nature it ‘appears’ to be doing something different. Worse, to ‘win’ is just to beat a level under a certain time, not to improve upon that time or improve win rates.
The sad thing is that various levels of AI are/were defined, but marketing is doing a successful job of drowning out fact checkers. Lots of things that weren’t considered AI now are. You have media claiming Minecraft is AI because it makes use of procedural generation – let’s forget the fact that Diablo came out years earlier and also uses it… No, the important thing is that the foundations for neural networks were being laid as early as the 1940s, and big projects over the years using supercomputers like Deep Blue, Summit, and others are completely ignored.
AI has been around, it’s been defined, and it’s not here yet. We have glorified auto-complete bots that happen to be wrong to a tolerable point, so businesses have an excuse to lay off people. While there are some amazing things they can do sometimes, the AI I envisioned as a child doesn’t exist quite yet.
Oh yeah, it was Code Bullet, right?
> I wrote an AI that wins minesweeper.
> Looks inside
> Literally a script that randomly clicks random points on the screen until the game ends.
I agree, but tell that to advertisement departments haha
what? i thought llms are generative ai
The term AI is used to describe whatever the fuck they want these days, to the point of oversaturation. They had to come up with shit like “narrow AI” and “AGI” in order to be able to talk about this stuff again. Hence the backlash to the inappropriate usage of the term.