(no title)
shock-value|2 years ago
It’s clear that, at the least, they can discern a great many patterns across a wide range of conceptual depths. It’s an architectural advance easily on the level of the convolutional neural network, if not more profound. The idea that NLP is “solved” isn’t a crazy notion, though I won’t take a side on that.
That said, it’s equally obvious that they are not AGI unless you have a really uninspired and self-limiting definition of AGI. They are purely feedforward, aside from the single generated token that becomes part of the input to the next iteration. Multimodality has not been incorporated (aside from possibly a limited form in GPT-4). Real-world decision-making and agency are entirely outside the bounds of what these models can conceive of or act toward.
Effectively and by design, these models are computational behemoths trained to do one singular task only: wring a large textual input through an enormous interconnected web of calculations, purely in service of distilling everything down to a single word as output, a hopefully plausible guess at what comes next given what’s been seen.
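That loop — whole context in, one token out, token appended, repeat — can be sketched in a few lines. This is a toy illustration only: the `next_token` bigram lookup is a hypothetical stand-in for a transformer's forward pass, not anything resembling a real model.

```python
# Minimal sketch of autoregressive generation. The "model" here is a toy
# bigram table standing in for a transformer's feedforward pass.

def next_token(context):
    """One forward pass: map the full context to a single next token."""
    bigrams = {"the": "cat", "cat": "sat", "sat": "down"}
    return bigrams.get(context[-1], "<eos>")

def generate(prompt, max_tokens=5):
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)   # entire context goes in, one token comes out
        if tok == "<eos>":
            break
        tokens.append(tok)         # the generated token is fed back as input
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

The point of the sketch is structural: each step is a pure function of the text so far, and the only "state" carried forward is the text itself.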
famouswaffles|2 years ago
And you want to know the crazier thing? Evidently a lot of researchers feel similarly.
General Purpose Technologies (from the jobs paper), General Artificial Intelligence (from the creativity paper). Want to know the original title of the recent Microsoft paper? "First contact with an AGI system".
The skirting around the word that is now happening is insanely funny. Look at the last one. Fuck, they just switched the word order. Nobody wants to call a spade a spade yet but it's obvious people are figuring it out.
I can show you output that clearly demonstrates understanding and reasoning. That's not the problem. The problem is that when I do, the argument quickly shifts to "it's not true understanding!" What a bizarre argument.
This is the fallacy of the philosophical zombie. Somehow there is this extra special distinction between two things, and yet you can't actually show it. You can't test for this so-called huge distinction. A distinction that can't be tested for is not a distinction.
The intelligence arguments are also stupid because they miss the point entirely.
What matters is that the plane still flies, the car still drives and the boat still sails. For the people who are now salivating at their potential, or dreading the possibility of being made redundant by them, these large language models are already intelligent enough to matter.
shock-value|2 years ago
I'm definitely not contesting that.
I've always considered the idea of "AGI" to mean something like the holy grail of machine learning -- the point at which there is no real point in pursuing further advances in artificial intelligence, because the AI itself will discover and apply such augmentations using its own capabilities.
I have seen no evidence that these transformer models would be able to do this, but if the current models can, then perhaps I will eat my words. (Doing this would likely mean that GPT-4 would need to propose, implement, and empirically test some fundamental architectural advancements in both multimodal and reinforcement learning.)
By the way, many researchers are equally convinced that these models are in fact not AGI -- that includes the head of OpenAI.
behringer|2 years ago
I'm an old-hat hobby programmer who played around with AI demos back in the mid-to-late '90s and 2000s, and ChatGPT is nothing like any AI I've ever seen before.
It absolutely can appear to reason, especially if you manipulate it out of its safety controls.
I don't know what it's doing to cause such compelling output, but it's certainly not just recursively spitting out good words to use next.
That said, there are fundamental problems with ChatGPT's understanding of reality, which is to say it's about as knowledgeable as a box of rocks. Or perhaps a better analogy: about as smart as a room-sized pile of loose papers.
But knowing about reality and reasoning are two very different things.
I'm excited to see where things go from here.