HarHarVeryFunny|1 year ago
Sure, we're also pattern matching, but additionally (among other things):
1) We're continually learning, so we can update our predictions when our pattern matching is wrong
2) We're autonomous - continually interacting with the environment, and learning how it responds to our interactions
3) We have built-in biases such as curiosity and boredom that drive us to experiment, gain new knowledge, and succeed in cases where "pre-training to date" would have failed us
bagful|1 year ago
For one, a brain can’t do anything without irreversibly changing itself in the process; our reasoning is not a pure function.
For a person to truly understand something, they must have a well-refined (as judged by usefulness and correctness), malleable internal model of a system that can be tested against reality, and they must be aware of the limits of the knowledge that model can provide.
Alone, our language-oriented mental circuits are a thin, faulty conduit to our mental capacities; we make sense of words as they relate to mutable mental models, not simply in latent concept-space. These models can live in dedicated but still mutable circuitry such as the cerebellum, or exist as webs of association between sense-objects (objects of the physical senses or of concepts - the sense-objects produced by conscious thought).
So if we are pattern-matching, it is not simply over words, or over their meanings in relation to the whole text, or even over their meanings relative to all language ever produced. We translate words into problems, match problems to models, evaluate those internal models to produce possibly competing solutions, and then face the challenge of verbalizing the solutions. If we were reasoning only in latent space, this last task would pose no significant difficulty.
acjohnson55|1 year ago
At the end of the day, we're machines, too. I wrote a piece a few months ago with an intentionally provocative title, questioning whether we're truly on a different cognitive level:
https://acjay.com/2024/09/09/llms-think/
tomrod|1 year ago
AI can only interpolate. We may perceive it as extrapolation, but all LLM architectures are fundamentally cleverly designed lossy compression.
oh_my_goodness|1 year ago
I asked ChatGPT to help out:
"The distinction between AI and humans often comes down to the concept of understanding. You’re right to point out that both humans and AI engage in pattern matching to some extent, but the depth and nature of that process differ significantly."
"AI, like the model you're chatting with, is highly skilled at recognizing patterns in data, generating text, and predicting what comes next in a sequence based on the data it has seen. However, AI lacks a true understanding of the content it processes. Its 'knowledge' is a result of statistical relationships between words, phrases, and concepts, not an awareness of their meaning or context."
oh_my_goodness|1 year ago
:)