The major breakthroughs I’m talking about don’t necessarily involve consciousness or sentience; those would only be required to replicate a human, which isn’t the mark. The target is to learn, create, and adapt the way a human would. Current AI products merely produce derivatives of human-generated data, replicating existing work in similar contexts. If I ask an AI tool what’s needed to achieve AGI, it will reference whatever research was fed into the model rather than perform any new research.
AI tools like LLMs and image generators can feel human because they’re derivative of human work; a proper AGI solution probably wouldn’t feel human, since it would work differently to achieve the same ends. It’s like using a machine learning program to optimize an algorithm versus asking a mathematician: they’ll use different methods and their solutions will look very different, but they’ll reach the same end goal (i.e. arrive at very similar answers). Think of Data in Star Trek: he’s portrayed as using very different methods to solve problems, but he’s just as effective as his human counterparts, if not more so.
Personally, I think solving quantum computing is a prerequisite for AGI, whether or not the end result actually uses quantum hardware, because it involves building a deterministic machine out of a probabilistic one. That’s similar to how going from human brains (which I believe are probabilistic) to digital brains would likely work, just in reverse. And we’re quite far from solving quantum computing at any reasonable scale of data. My guess is that practical quantum computers are 20-50 years out and AGI is even further, but if we make a quantum computing breakthrough in the next 10 years, I’d revise my AGI estimate downward.
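To make that “deterministic machine out of a probabilistic one” idea concrete, here’s a loose toy sketch in Python (entirely hypothetical names, not how any real quantum stack works): repeated sampling plus a majority vote turns an unreliable probabilistic process into an effectively deterministic one, which is roughly analogous to running a quantum circuit many times and taking the most frequent measurement outcome.

```python
import random

def noisy_bit(true_bit: int, error_rate: float = 0.2) -> int:
    """A probabilistic 'machine': returns the true bit, but flips it
    with probability error_rate (a stand-in for a noisy measurement)."""
    return true_bit ^ (random.random() < error_rate)

def deterministic_read(true_bit: int, trials: int = 101) -> int:
    """Majority vote over repeated probabilistic runs: as trials grow,
    the output becomes effectively deterministic (law of large numbers)."""
    ones = sum(noisy_bit(true_bit) for _ in range(trials))
    return int(ones > trials / 2)

if __name__ == "__main__":
    # A single call to noisy_bit(1) is wrong ~20% of the time,
    # but the majority vote over 101 trials is almost never wrong.
    print(noisy_bit(1), deterministic_read(1))
```

The point of the sketch is just the direction of travel: redundancy buys determinism from randomness, and the hard part of quantum computing is doing that at scale.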
That’s my point.