We've had Markov chain generators for a while; having enough computing power to let them regurgitate Wikipedia, Reddit, and Stack Overflow content is not "a huge step towards AGI".
Jabrov|1 year ago
It's true that Markov chain generators have existed for years. But historically their output was usually just a cute thing that gave you a chuckle; they were seldom as generally useful as LLMs currently are. I think the increase you mention in compute power and data is itself a huge step forward.
But transformers have also been super important. Transformer-based LLMs are orders of magnitude more powerful, smarter, and trained on more data than previous types of models because of how they scale. The attention mechanism also lets them condition on far more of the input, not just the few preceding tokens.
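To make the contrast concrete, here is a toy bigram Markov chain generator: each next word is sampled based only on the single preceding word, which is exactly the limited context the comment above is pointing at (the corpus and function names here are purely illustrative, not from any real system):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain: each step looks at only the current word,
    picking a random observed successor. No attention, no long-range
    context, just the previous token."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:
            break
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat the dog sat on the log"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

A transformer, by contrast, computes attention weights over the entire preceding sequence at every step, which is what lets it stay coherent over long spans where a chain like this immediately loses the thread.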
dijit|1 year ago
If you want something useful, then yes, we're getting closer.
AGI is something specific: as a prerequisite, it must understand what is being asked. What we have now is a puppet show that makes us humans think the machine is thinking, much like Markov chains.
There is absolutely some utility in this, but it's about as close to AGI as the horse-cart is to commercial aircraft.
Some AI hype people are really uncomfortable with that fact. I'm sorry, but that reality will hit you sooner rather than later.
It does not mean what we have is perfect, cannot be improved in the short term, or that it has no practical applications already.
EDIT: downvoting me won't change this; please go study the field of academic AI properly.