

You’re completely missing the point. It honestly sounds like you want them to keep pursuing AGI, because I can’t see any other reason why you’d be mocking the people arguing that we shouldn’t.
How close to building the nuclear bomb do researchers need to get before it’s time to hit the brakes? Is it really that unreasonable to suggest that maybe we shouldn’t even be trying in the first place? I don’t understand where this cynicism comes from. From my perspective, these people are actually on the same side as the anti-AI crowd I see here every day - yet they’re still being ridiculed just for having the audacity to even consider that we might actually stumble upon AGI, and that doing so could be the last mistake humanity ever makes.



Intelligence is a human-made term to describe an abstract phenomenon - it’s not a concrete thing but a spectrum. On one end of that spectrum is a rock: it doesn’t do anything, it just is. On the opposite end lies what we call superintelligence. Somewhere in between are things like a mouse, a sunflower, a human, a large language model, and a dolphin. All of these display some degree of intelligent behavior. It’s not that one is intelligent and another isn’t - it’s that some are more intelligent than others.
While it’s true that we don’t fully understand how intelligence works, it’s false to say we don’t understand it at all or that we’re incapable of creating intelligent systems. The chess opponent on the Atari is an intelligent system. It can acquire, understand, and use information - which fits one of the common definitions of intelligence. It’s what we call a narrow intelligence, because its intelligence is limited to a single domain. It can play chess - and nothing else.
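To make the “narrow” part concrete, here’s a minimal sketch of the kind of single-domain game-tree search such an engine runs. This is purely illustrative - it’s not the actual Atari program, and I’ve used tic-tac-toe instead of chess just to keep it short - but the point it demonstrates is the same: the system plays its one game perfectly and is useless for anything else.

```python
# Minimal sketch of a "narrow intelligence": exhaustive minimax search that
# plays perfect tic-tac-toe and can do nothing outside that single domain.
# (Illustrative only - a real chess engine adds pruning, evaluation, etc.)

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    # Return 'X' or 'O' if someone has three in a row, else None.
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Return (score, move), scored from X's point of view:
    # +1 = X wins, -1 = O wins, 0 = draw. X maximizes, O minimizes.
    w = winner(board)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    best = None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X')
        if (best is None
                or (player == 'X' and score > best[0])
                or (player == 'O' and score < best[0])):
            best = (score, m)
    return best

if __name__ == "__main__":
    # X to move on an empty board; perfect play from both sides is a draw.
    score, move = minimax(' ' * 9, 'X')
    print(f"best opening move: {move}, outcome with perfect play: {score}")
```

Within its domain this thing is unbeatable - it acquires the board state, “understands” it well enough to evaluate every continuation, and uses that to pick a move. Ask it anything outside that domain and it has literally nothing to say. That’s all “narrow” means.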
Humans, on the other hand, are generally intelligent. Our intelligence spans multiple independent domains, and we can learn, reason, and adapt across them. We are, by our own definition, the benchmark for general intelligence. Once a system reaches human-level intelligence, it qualifies as generally intelligent. That’s not some cosmic law - it’s an arbitrary threshold we invented.
The reason this benchmark matters is that once something becomes as intelligent as we are, it no longer needs us to improve itself - it can carry on the research that produced it, including redesigning its own successors, without any human input. By definition, we have nothing more to offer it. We’ve raised the tiger cub to adulthood - and now it no longer needs us to feed it. It’s free to feed on us if it so desires.