

Try encasing it with three ` symbols one line above and below it, or you can use the \ backslash escape symbol before the formatting, that way people can see what it is you typed.
```
code block here
```
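For the backslash route, a quick illustration (assuming your instance follows standard Markdown escaping): typing

```
\`like this\`
```

shows the backticks as literal characters instead of turning the text into inline code.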




What's the spoiler markup for your instance? Piefed.world still runs on 1.1.5, so there's no button to automate that.


Lmao nah, servers are in London.
Idk how well Factorio hides users' details once connected tho


Keep me in the loop if we plan those, I can probably make time.


```
{
"mods":
[
{
"name": "base",
"enabled": true
},
{
"name": "aai-containers",
"enabled": true
},
{
"name": "aai-industry",
"enabled": true
},
{
"name": "aai-vehicles-hauler",
"enabled": true
},
{
"name": "aai-vehicles-miner",
"enabled": true
},
{
"name": "aai-vehicles-warden",
"enabled": true
},
{
"name": "bobassembly",
"enabled": true
},
{
"name": "bobclasses",
"enabled": true
},
{
"name": "bobelectronics",
"enabled": true
},
{
"name": "bobequipment",
"enabled": true
},
{
"name": "bobgreenhouse",
"enabled": true
},
{
"name": "bobinserters",
"enabled": true
},
{
"name": "boblibrary",
"enabled": true
},
{
"name": "boblogistics",
"enabled": true
},
{
"name": "bobmining",
"enabled": true
},
{
"name": "bobmodules",
"enabled": true
},
{
"name": "bobores",
"enabled": true
},
{
"name": "bobplates",
"enabled": true
},
{
"name": "bobpower",
"enabled": true
},
{
"name": "bobrevamp",
"enabled": true
},
{
"name": "bobtech",
"enabled": true
},
{
"name": "bobvehicleequipment",
"enabled": true
},
{
"name": "bobwarfare",
"enabled": true
},
{
"name": "Bottleneck",
"enabled": true
},
{
"name": "bullet-trails",
"enabled": true
},
{
"name": "CleanFloor",
"enabled": true
},
{
"name": "factorissimo-2-notnotmelon",
"enabled": true
},
{
"name": "factoryplanner",
"enabled": true
},
{
"name": "far-reach",
"enabled": true
},
{
"name": "flib",
"enabled": true
},
{
"name": "FNEI",
"enabled": true
},
{
"name": "helmod",
"enabled": true
},
{
"name": "Milestones",
"enabled": true
},
{
"name": "RateCalculator",
"enabled": true
},
{
"name": "RPGsystem",
"enabled": true
},
{
"name": "rso-mod",
"enabled": true
},
{
"name": "show-max-underground-distance",
"enabled": true
},
{
"name": "squeak-through-2",
"enabled": true
},
{
"name": "StatsGui",
"enabled": true
},
{
"name": "Teleporters",
"enabled": true
},
{
"name": "textplates",
"enabled": true
},
{
"name": "Todo-List",
"enabled": true
},
{
"name": "valves",
"enabled": true
},
{
"name": "YARM",
"enabled": true
},
{
"name": "ZombieHorde",
"enabled": true
}
]
}
```
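If anyone wants to sanity-check their own setup against this, here's a minimal sketch (assuming the JSON above is saved as mod-list.json, the filename Factorio uses in its mods folder; the path is a placeholder, adjust it for your install) that prints the enabled mods:

```python
# Minimal sketch: list the enabled mods from a Factorio mod-list.json.
# Assumes the JSON above was saved as mod-list.json; adjust the path
# for your own install.
import json
from pathlib import Path

mod_list_path = Path("mod-list.json")

with mod_list_path.open() as f:
    data = json.load(f)

# Skip "base" since it ships with the game; sort the rest case-insensitively.
enabled = sorted(
    (m["name"] for m in data["mods"] if m["enabled"] and m["name"] != "base"),
    key=str.lower,
)

print(f"{len(enabled)} mods enabled (plus base):")
for name in enabled:
    print(f"  {name}")
```

Run it from wherever the file lives and diff the output against your own list.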


idk about 7 hours ago, but it’s the top comment
so
skill issue


LLMs have been at a standstill since 2021. I would argue the current models were already around in the late 80s; they're just using more compute time now, but it's being marketed as the future to confuse a billion dopes like you who don't understand technology. It's the ultimate Ponzi scheme: the companies are making no money but their valuation keeps rising.
To clarify, OpenAI wrote a paper proving their model would never reach human output accuracy. They proved that getting the same level of improvement from GPT-3 to GPT-4 as from GPT-2 to GPT-3 would cost a literally EXPONENTIAL amount of resources, which was proven again in practice when they actually did it a couple of years later. Improving it again would cost more power than mankind currently produces in total, but the end result would still be hallucinating, liability-filled garbage, because in 2022 DeepMind proved, with LITERALLY INFINITE POWER AND TRAINING DATA, that it would not reach human output: the hard limit didn't even reach the mid-90s.
You are arguing with the AI companies and researchers themselves. Y'all need to understand that AI, as it is, is a fucking scam.
The paper from OpenAI: https://arxiv.org/pdf/2001.08361
The follow-up paper from DeepMind: https://arxiv.org/pdf/2203.15556
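To put rough numbers on the scaling argument (my own back-of-the-envelope sketch of the power law in the first paper, using the approximate compute exponent reported there, not a quote from it):

```
L(C) \approx (C_c / C)^{\alpha_C}, \qquad \alpha_C \approx 0.05
```

With an exponent that small, cutting the loss by a factor r takes roughly r^(1/0.05) = r^20 times the compute, so even halving it means on the order of 2^20 (about a million) times more compute.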


You literally don’t understand.
The human statements are the baseline, right or wrong, and the AI struggles to maintain numbers over 80% of that baseline.
Take however often a person is wrong and multiply it: that's AI. They like to call it “hallucination” and it will never, ever go away. In fact, it will get worse, because it has already polluted its own datasets, which it will keep pulling from and so produce even worse output, like noise from an amp in a feedback loop.
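To put rough numbers on the multiplication claim (my own illustration, not something from either paper): if a person is right 90% of the time and the model only holds 80% of that baseline, it lands at 0.8 × 0.9 = 72% correct, nearly triple the human error rate.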


If you think a 2:7 ratio after insulting a bunch of net negative slopper subhumans is enough to change my mind then welcome to the internet, my friend. That’s a figure of speech btw, I am not some dirty slopper’s friend.


Was the sign accurate? Did it provide more than milk?


Removed by mod


“Hey AI, I want to do this very specific thing but I don’t really know what it is called, can you help me?”
That was your previous example. You had a very specific thing in mind, meaning you knew what to search for from reputable sources. There are tons of ways to discover new previously unknown things, all of which are better than being a filthy stupid slopper.
“Hey AI, can you please think for me? Please? I need it, idk what to do.”


wow thanks for that /s


And I explained why that makes them a moron.


I think there's a point where you have to realize the topic of discussion is LLMs like ChatGPT, and that point was around the time we compared it to Web 3.0, something people hate and associate with tech bros and evil corporations.
The meaning of words changes based on context.


Unfortunately, an LLM lies about 1 time in 5 to 1 time in 10 (80% to 90% accuracy), with a hard limit proven by the OpenAI and DeepMind research papers, which state that even with infinite power and resources it would never approach human language accuracy. On top of that, the model is trained on human inputs, which are themselves flawed, so you multiply the average person's rate of being wrong.
In other words, you're better off browsing forums and asking people, or finding books on the subject, because the AI is full of shit and you're going to be one of those idiot sloppers everybody makes fun of: you won't know jack shit and you'll be confidently incorrect.


AI has no use. It only subtracts value and creates liabilities.


Granted, thus the third and final wish is spent.
Do not feel bad, the second wish was a turkey sandwich with no second meat option.


“you want people to hang out with you?”
Yes. You see how after I butchered your sentence the meaning fell apart? Because I cut out the conditional statement? Wasn’t that rude?
Which country do you live in?