Cooper always tries to walk it in.
Coffee is the same for me. Beer used to be the same, but I got used to it - still doesn’t taste great, but cold it’s acceptable. But not coffee, I never got used to it.
Yes, the first example does the same thing, but there’s still less to mentally parse. Ideally you should just use if len(mylist) == 0:
This is honestly the worst version regarding readability. Don’t rely on implicit coercion, people.
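For what it's worth, a minimal sketch of the two styles being compared (the function names are mine, purely for illustration):

```python
def is_empty_explicit(seq):
    # Explicit length check: a bit more to read, but unambiguous.
    return len(seq) == 0

def is_empty_implicit(seq):
    # Implicit truthiness: relies on empty sequences being falsy in Python.
    return not seq

# Both agree for the standard built-in sequence types.
for sample in ([], [1], (), "ab"):
    assert is_empty_explicit(sample) == is_empty_implicit(sample)
```

Both behave identically on lists; the disagreement in the thread is purely about which one is easier to mentally parse.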
Nintendo is a shitty company, and companies in general do shitty anti-consumer things, but passing along tariff costs isn’t one of them.
The sky color is part of the training data. How did the LLMs include the training data before it existed?
No, Europeans
I’m training, but still seem to be far away from escape velocity.
Maybe if we could catapult me at the point of climax?
The answer is disappointingly simple: emotional satisfaction.
For decades, these people have been told that they are incredibly generous towards their allies, and that they get nothing in return. That their allies are abusing their relationships. Of course this is false, but they’ve been told so every day.
Now they get to abuse their “abusers” right back.
My god.
There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn’t any natural lower or upper bound for the size, instead you choose it based on the hardware you want to run the model on.
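To make "size is just a parameter you pick" concrete, here's a toy parameter count for a plain feed-forward network; the width and depth knobs are hypothetical stand-ins for the real hyperparameters, and the dimensions are made up:

```python
def mlp_param_count(width: int, depth: int, d_in: int = 128, d_out: int = 10) -> int:
    """Count weights + biases of a toy fully connected network.

    `width` and `depth` are chosen before training; nothing in the
    math forces a particular value, so in practice you size them to
    fit the hardware you want to run on.
    """
    sizes = [d_in] + [width] * depth + [d_out]
    total = 0
    for a, b in zip(sizes, sizes[1:]):
        total += a * b + b  # weight matrix plus bias vector
    return total

# Doubling the width roughly quadruples the parameter count,
# since the width-by-width hidden layers dominate.
small = mlp_param_count(width=256, depth=4)
large = mlp_param_count(width=512, depth=4)
```

The point is just that there's no natural ceiling in the architecture itself: you can keep turning the width knob up as long as you have the hardware to hold the result.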
Now the promise from OpenAI (from their many papers, and press releases, and …) was that we’ll be able to reach AGI by scaling. Part of the reason why Microsoft invested so much money into OpenAI was their promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.
Now, through DeepSeek, you can scale even further with that hardware. If Microsoft really thought OpenAI could reach ChatGPT 5, 6 or whatever through scaling, they’d keep the GPUs for themselves to widen their moat.
But they’re not doing that, instead they’re scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don’t forget that there are many players in this field that keep pushing the bounds. If ChatGPT 4.5 is any indication, they’ll have to scale up massively to keep any advantage compared to the market. But they’re not doing that.
But really the “game” is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.
No, it’s not! AI models are supposed to scale. When you throw more hardware at them, they are supposed to develop new abilities. A game doesn’t get a new level because you’re increasing the resolution.
At this point, you either have a fundamental misunderstanding of AI models, or you’re trolling.
I’m supposed to be able to take a model architecture from today, scale it up 100x and get an improvement. I can’t make the settings in Crysis 100x higher than they can go.
Games always have a limit, AI is supposed to get better with scale. Which part do you not understand?
It’s still not a valid comparison. We’re not talking about diminishing returns, we’re talking about an actual ceiling. There are only so many options implemented in games - once they’re maxed out, you can’t go higher.
That’s not the situation we have with AI, it’s supposed to scale indefinitely.
If a new driver came out that gave Nvidia 5090 performance to games on GTX 1080-equivalent hardware, would you still buy a new video card this year?
It doesn’t make any sense to compare games and AI. Games have a well-defined upper bound for performance. Even Crysis has “maximum settings” that you can’t go above. Supposedly, this doesn’t hold true for AI: scaling it should continually improve it.
So: yes, in your analogy, MS would still buy a new video card this year if they believed further progress was possible and reasonably likely.
I’m gonna disagree - it’s not like DeepSeek uncovered some upper limit to how much compute you can throw at the problem. More efficient hardware use should be amazing for AI since it allows you to scale even further.
This means that MS isn’t expecting these data centers to generate enough revenue to be profitable, and they’re not willing to bet on further advancements that might make them profitable. In other words, MS doesn’t have a positive outlook for AI.
I say we still do it, for good luck
What an utterly ridiculous notion. Obviously it’s a magical battery that, once charged, can be inserted into an ancient titan robot to power it back up.
Very understandable! I’ve also stopped drinking beer, because it never started tasting good.