Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 2 Posts
  • 24 Comments
Joined 3 months ago
Cake day: July 17th, 2025

  • Intelligence is a human-made term to describe an abstract phenomenon - it’s not a concrete thing but a spectrum. On one end of that spectrum is a rock: it doesn’t do anything, it just is. On the opposite end lies what we call superintelligence. Somewhere in between are things like a mouse, a sunflower, a human, a large language model, and a dolphin. All of these display some degree of intelligent behavior. It’s not that one is intelligent and another isn’t - it’s that some are more intelligent than others.

    While it’s true that we don’t fully understand how intelligence works, it’s false to say we don’t understand it at all or that we’re incapable of creating intelligent systems. The chess opponent on the Atari is an intelligent system. It can acquire, understand, and use information - which fits one of the common definitions of intelligence. It’s what we call a narrow intelligence, because its intelligence is limited to a single domain. It can play chess - and nothing else.

    Humans, on the other hand, are generally intelligent. Our intelligence spans multiple independent domains, and we can learn, reason, and adapt across them. We are, by our own definition, the benchmark for general intelligence. Once a system reaches human-level intelligence, it qualifies as generally intelligent. That’s not some cosmic law - it’s an arbitrary threshold we invented.

    The reason this benchmark matters is that once something becomes as intelligent as we are, it no longer needs us to improve itself. By definition, we have nothing more to offer it. We’ve raised the tiger cub to adulthood - and now it no longer needs us to feed it. It’s free to feed on us if it so desires.


  • You’re completely missing the point. It honestly sounds like you want them to keep pursuing AGI, because I can’t see any other reason why you’d be mocking the people arguing that we shouldn’t.

    How close to the nuclear bomb do researchers need to get before it’s time to hit the brakes? Is it really that unreasonable to suggest that maybe we shouldn’t even be trying in the first place? I don’t understand where this cynicism comes from. From my perspective, these people are actually on the same side as the anti-AI sentiment I see here every day - yet they’re still being ridiculed just for having the audacity to even consider that we might actually stumble upon AGI, and that doing so could be the last mistake humanity ever makes.


  • It feels like more and more people are taking pride in what they don’t do or what they aren’t, rather than in what they actually stand for. You see it in the people who brag about not using ChatGPT, not being on Twitter, not owning a car - or in those who define themselves by their hatred of others: the rich, meat eaters, Republicans, whoever the current outgroup is.

    It’s like their entire identity revolves around opposition. Their sense of belonging comes not from shared values, but from shared resentment. If the only thing that unites you with your tribe is who you hate, then you don’t really have much of a tribe - just a mob.

    What’s most ironic is how many of these same people see themselves as independent thinkers, even though their views are often completely predictable once you know what they’re against. It’s as if they can’t even decide what they believe until they’ve first heard what “the enemy” thinks.


  • If you’re certain an “AI crash” is coming, then shorting AI companies is how you’d not only avoid the fallout but actually profit from it. That’s speculative investing though - basically gambling.

    For everyone else without the ability to predict the future, the general advice stays the same: invest in low-cost, highly diversified index funds spread across sectors and regions. The markets are deeply interconnected, so it doesn’t really matter where you’re invested - when the market crashes, you’re getting hit. If you’re all in on tech, you’ll get hit hard; if you’re spread out, you’ll get hit less. But either way, you’ll feel it.

    For someone in it for the long run, it doesn’t matter what the market’s doing. I just keep doing what I’ve always done - managing my finances carefully and investing my savings.
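
    The “hit hard vs. hit less” point can be sketched with some back-of-the-envelope arithmetic. All the numbers below are made-up assumptions for illustration (a 40% tech crash with a 10% knock-on drop elsewhere), not a forecast or financial advice:

    ```python
    # Toy illustration: how an assumed 40% tech-sector crash hits a
    # concentrated portfolio vs. a diversified one. The drop sizes and
    # the 20/80 allocation are arbitrary assumptions for the example.

    tech_drop = -0.40   # assumed crash in the tech sector
    broad_drop = -0.10  # assumed knock-on drop in the rest of the market

    all_in_tech = 1.0 * tech_drop                      # 100% tech
    diversified = 0.2 * tech_drop + 0.8 * broad_drop   # 20% tech, 80% elsewhere

    print(f"All-in on tech: {all_in_tech:.0%}")  # -40%
    print(f"Diversified:    {diversified:.0%}")  # -16%
    ```

    Both portfolios lose money - diversification doesn’t dodge the crash, it just spreads the damage.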


  • The main issue in Japan during the 90s was that the government refused to acknowledge the reality of the situation and to let the market crash. Instead of allowing bankruptcies and bad loans to clear, it propped up banks and corporations for years - freezing growth and causing decades of deflation and stagnation. The real lesson from Japan isn’t about the crash itself, but about the response: avoiding short-term pain led to long-term paralysis.

    If an AI bubble bursts, it would probably resemble the dot-com crash more than Japan’s experience. Central banks act much faster now, bad debt gets cleared out instead of buried, and the global economy isn’t built entirely on AI speculation. So even if valuations take a hard hit, a decades-long depression like Japan’s is very unlikely.