Aren't humans stochastic parrots in the end? I mean, when we "learn", don't we just tune our internal stochastic functions? Whether it's walking, learning a language, or anything else.
Imagine I asked you what 5+3+9 is, but you weren't allowed to calculate the intermediate values (5+3=8, then 8+9=17) inside your head. That's the constraint a plain one-shot language model answers under. Is it really that hard to believe that humans have internal thoughts and that they think before they speak? Is it really such a revelation that I have to remind you of it?
Creating a small group of bot 'personalities' that have an internal dialog, generating and sharing intermediate values before coming to a consensus and issuing a response to a user, is trivial. I did it in my earliest experiments with GPT-3.
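A minimal sketch of that kind of multi-persona loop, assuming the current OpenAI Python client. The persona prompts, model name, and moderator step here are invented for illustration, not taken from those early GPT-3 experiments:

    # Several personalities discuss the question privately, sharing
    # intermediate values in a transcript; a moderator persona then reads
    # the discussion and issues the single response the user sees.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder; any chat model works

    PERSONAS = {
        "optimist": "You propose direct answers quickly.",
        "skeptic": "You poke holes in proposed answers.",
        "checker": "You verify arithmetic and facts step by step.",
    }

    def ask(system: str, transcript: str, question: str) -> str:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content":
                    f"Question: {question}\n\nInternal discussion so far:\n{transcript}"},
            ],
        )
        return resp.choices[0].message.content

    def answer(question: str, rounds: int = 2) -> str:
        transcript = ""
        for _ in range(rounds):
            for name, system in PERSONAS.items():
                turn = ask(system, transcript, question)
                transcript += f"{name}: {turn}\n"  # shared intermediate values
        # the moderator reads the private dialog and speaks to the user
        return ask("Summarize the discussion into one final answer for the user.",
                   transcript, question)

    print(answer("What is 5 + 3 + 9?"))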
You could use the same framework to generate an internal dialog for a bot.
A lot of people don't think before they speak. If you tell me you have a small conversation with yourself before each thing you say out loud during a conversation, I will have doubts. Quick wit and fast-paced conversation do not leave time for any real internal narration, just "stream of consciousness".
There is a time for carefully choosing and reflecting on your words, surely, but there are many times when staying in tune with a real-time conversation takes precedence.
Check out you.com's Genius mode; it does internal dialogue of sorts, which you can open up and explore. The same is true for many "agent"-based systems. It turns out giving LLMs the ability to talk through problems with themselves massively improves their abilities, same as using chain-of-thought prompting.
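Chain-of-thought prompting itself is mostly a change to the prompt, not the system. A hedged sketch using an OpenAI-style chat client (the model name and prompt wording are placeholders):

    from openai import OpenAI

    client = OpenAI()

    def solve(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Direct answer: the model must commit to the result in its first tokens.
    print(solve("What is 17 * 23? Reply with only the number."))

    # Chain of thought: the model may generate intermediate values first,
    # which is exactly the "internal dialog" externalized into the output.
    print(solve("What is 17 * 23? Work it out step by step, then state the answer."))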
No, that's absurdly reductive. You might as well say "aren't humans just calculators made of meat in the end?". If you append "in the end" to any analogy, you'll find some people who are willing to stretch to fit the analogy because they like it.
If you've ever had a conversation with a toddler, they do sound a bit like stochastic parrots. It takes us a while to be able to talk coherently. The learning process in schools involves a lot of repetition, from learning the ABCs to mastering calculus.
Toddlers are just learning the building blocks of language. You could make the same statement about any new skill. However, at some point, most humans gain the ability to take two concepts they have heard about before and create a third concept that they have never encountered. You can also get that with artificial neural networks, but it is fundamentally impossible with n-grams.
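That n-gram limitation is easy to demonstrate with a toy bigram model: it can only follow a word with words that followed it in the training corpus, so any pairing it never observed has probability zero, no matter how you sample:

    # Toy bigram model: continuations come only from observed word pairs.
    from collections import defaultdict
    import random

    corpus = "the cat sat on the mat the dog sat on the rug".split()

    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def generate(start: str, length: int = 5) -> list[str]:
        out = [start]
        for _ in range(length):
            nxt = follows.get(out[-1])
            if not nxt:
                break
            out.append(random.choice(nxt))
        return out

    print(generate("the"))  # recombines seen bigrams, e.g. "the cat sat on the dog"
    print(follows["cat"])   # ['sat'] -- "cat" can never be followed by any
                            # other word; that pair was never observed

A neural network, by contrast, embeds words in a shared space, so it can assign nonzero probability to combinations it never saw.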
No, because we are able to extrapolate from our experience. The ability to synthesize something coherent that doesn't map directly onto our training set is a major difference between human intelligence and what we call AI today.
Isn't there an argument that our brains are simply better at statistics and modeling than current AI? Forget architectural limitations. What is the nature of the extrapolation? How do individuals weigh their experiences and determine likely outcomes?
The overwhelming majority of human advancement is interpolation. Extrapolation is rare, and we tend to only realize something was extrapolation after the fact.
"extrapolate from our experience"
"synthesize something coherent"
These are non-scientific concepts. You are basically saying "humans are doing something more, but we can't really explain it".
That assumption is getting weaker by the day. Our entire existence is a single, linear, time-sequence data set. Am I "extrapolating from my experience" when I decide to scratch my head? No, I got a sequential data point of an "itch", and my reward programming has learned to output "scratch".