The thing that takes inputs, garbles them together without thought, and spits them out again can’t be intelligent. It’s literally not capable of it. Now if you were to replicate the brain, sure, you could probably create something kinda “smart”. But we don’t know shit about our brain, evolution took millions of years, and humans are still insanely flawed.
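To make the “garbles inputs and spits them out” claim concrete, here’s a minimal sketch of a bigram text generator (a toy of my own, not any real product): every word it emits comes straight from its training text, recombined by chance, with no representation of meaning anywhere.

```python
import random
from collections import defaultdict

# Toy bigram "text predictor" (my own sketch, not any real product):
# it can only ever emit words it has already seen, in orders it has
# already seen, which is the "remix inputs and spit them out" point.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Map each word to the list of words that followed it in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        options = following.get(word)
        if not options:                    # dead end: no observed continuation
            break
        word = random.choice(options)      # pick a previously seen continuation
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug" -- pure recombination
```

Real LLMs are vastly more sophisticated than this, but the structural point stands: the output space is shaped entirely by the training data.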
Yup, AGI is terrifying; luckily it’s a few centuries off. The parlor-trick text predictor we have now is just bad for the environment and the economy.
Eh, probably not a few centuries. It could be, IDK, but I don’t think it makes sense to quantify it like that.
We’re a few major breakthroughs away, and breakthroughs generally don’t happen all at once; they’re usually the product of tons of minor breakthroughs. If we put everyone and their dog into R&D, we could dramatically increase the production of minor breakthroughs and thereby reduce the time to AGI, but we aren’t doing that.
So yeah, maybe centuries, maybe decades, IDK. It’s hard to estimate the pace of research and what new obstacles we’ll find along the way that will need their own breakthroughs.
We are dozens of world-changing breakthroughs in the understanding of consciousness, sapience, and sentience, plus even more in computer and electrical engineering, away from even understanding what the final product of an AGI development program would look like.
We are not anywhere near close to AGI.
That’s my point.
The major breakthroughs I’m talking about don’t necessarily involve consciousness/sentience; those would be required to replicate a human, which isn’t the mark. The target is to learn, create, and adapt like a human would. Current AI products merely produce derivatives of human-generated data, replicating existing work in similar contexts. If I ask an AI tool what’s needed to achieve AGI, it will reference whatever research has been fed into the model, not perform some new research.
AI tools like LLMs and image generators can feel human because they’re derivative of human work; a proper AGI solution probably wouldn’t feel human, since it would work differently to achieve the same ends. It’s like using a machine learning program to optimize an algorithm versus a mathematician: they’ll use different methods and their solutions will look very different, but they’ll achieve the same end goals (i.e., come up with a very similar answer). Think of Data in Star Trek: he’s portrayed as using very different methods to solve problems, but he’s just as effective as, if not more effective than, his human counterparts.
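Here’s a toy version of that optimizer-vs-mathematician analogy (my own example, nothing more): minimize f(x) = (x − 3)². The “mathematician” solves it analytically in one step; a search-based “ML program” stumbles toward the same answer by trial and error.

```python
import random

# Toy objective for the analogy: f(x) = (x - 3)^2, minimized at x = 3.
f = lambda x: (x - 3) ** 2

# Analytic route: f'(x) = 2(x - 3) = 0  =>  x = 3, derived in one step.
analytic_x = 3.0

# Search route: random local perturbations, keep whatever improves f.
x = 0.0
for _ in range(10_000):
    candidate = x + random.uniform(-0.1, 0.1)
    if f(candidate) < f(x):
        x = candidate

print(analytic_x, round(x, 3))  # different methods, nearly identical answers
```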
Personally, I think solving quantum computing is needed to achieve AGI, whether or not we use quantum computing in the end result, because it involves creating a deterministic machine out of a probabilistic one, and that’s similar to how going from human brains (which I believe are probabilistic) to digital brains would likely work, just in reverse. And we’re quite far from quantum computers that work at any reasonable scale. I’m guessing practical quantum computers are 20-50 years out, and AGI is probably even further, but if we make a breakthrough in quantum computing in the next 10 years, I’d revise my estimate for AGI downward.
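The “deterministic machine out of a probabilistic one” idea can be sketched very crudely (this is my simplification, not real quantum error correction): a noisy bit returns the true answer only with some probability, but majority-voting many samples makes the overall readout effectively deterministic.

```python
import random

# Crude sketch (my simplification, not real quantum error correction):
# a probabilistic bit returns the true value with probability p_correct,
# but majority-voting many samples gives a near-deterministic readout.
def noisy_bit(true_value, p_correct=0.7):
    return true_value if random.random() < p_correct else 1 - true_value

def read_deterministically(true_value, samples=1001):
    ones = sum(noisy_bit(true_value) for _ in range(samples))
    return 1 if ones > samples // 2 else 0

# With 1001 samples at p=0.7, the majority vote is wrong with vanishingly
# small probability, so the readout behaves like a deterministic machine
# built out of probabilistic parts.
print(read_deterministically(1))  # almost surely prints 1
```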
It’d be ironic if this is our great filter: boiling the oceans for a text predictor.