Many people don't think we have any good evidence that our brains aren't essentially the same thing: a stochastic statistical model that produces outputs based on inputs.
Of course, you're right. Neural networks mimic exactly that after all. I'm certain we'll see an ML model developed someday that fully mimics the human brain. But my point is an LLM isn't that; it's a language model only. I know it can seem intelligent sometimes, but it's important to understand what it's actually doing and not ascribe feelings to it that don't exist in reality.
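To make "it's a language model only" concrete: at bottom, a language model maps a context to a probability distribution over next tokens and samples from it. Here's a minimal sketch of that loop — the vocabulary and probabilities are invented for illustration; a real model learns billions of parameters from text rather than using a lookup table:

```python
import random

# Toy stand-in for a trained language model: maps a context (last two
# tokens) to a probability distribution over next tokens. Real models
# learn these probabilities from huge text corpora; these numbers are
# made up for illustration.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
}

def sample_next(context, rng=random.random):
    """Pick the next token by sampling the model's distribution."""
    probs = NEXT_TOKEN_PROBS[tuple(context[-2:])]
    r, cumulative = rng(), 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding at the tail

print(sample_next(["the", "cat"]))  # one of: sat / ran / slept
```

Generation is just this step repeated, feeding each sampled token back into the context. Whatever one thinks brains do, this is the whole of what the model itself does.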
mubou|10 months ago
Too many people these days are forgetting this key point and putting a dangerous amount of faith in ChatGPT etc. as a result. I've seen DOCTORS using ChatGPT for diagnosis. Ignorance is scary.
goatlover|10 months ago
Do biologists and neuroscientists not have any good evidence, or is that just computer scientists and engineers speaking outside their field of expertise? There's always been this danger of taking computer and brain comparisons too literally.
root_axis|10 months ago
If you're willing to torture the analogy, you can find a way to describe literally anything as a system of outputs based on inputs. In the case of the brain-to-LLM comparison, people are inclined to do it because they're eager to anthropomorphize something that produces text they can interpret as a speaker, but it's totally incorrect to suggest that our brains are "essentially the same thing" as LLMs. The comparison is specious even on a surface level. It's like saying that birds and planes are "essentially the same thing" because flight was achieved by modeling planes after birds.
SJC_Hacker|10 months ago
For example, they are dismal at math problems that aren't just slight variations of problems they've seen before.
Here's one by blackandredpenn where ChatGPT insisted its solution to a problem that high school / talented middle school students could solve was correct, even after attempts to convince it that it was wrong. https://youtu.be/V0jhP7giYVY?si=sDE2a4w7WpNwp6zU&t=837
LordDragonfang|10 months ago
> For example, they are dismal at math problems that aren't just slight variations of problems they've seen before.
I know plenty of teachers who would describe their students the exact same way. The difference is mostly one of magnitude (of delta in competence), not quality.
Also, I think it's important to note that by "could be solved by high school / talented middle school students" you mean "specifically designed to challenge the top ~1% of them". Because if you say "LLMs only manage to beat 99% of middle schoolers at math", the claim seems a whole lot different.
SJC_Hacker|10 months ago
But that 1% is pretty important.
Rewind earlier to see the real answer.

jquery|10 months ago
https://chatgpt.com/share/67f40cd2-d088-8008-acd5-fe9a9784f3...

nativeit|10 months ago