heyjamesknight | 4 months ago
No. The LLM does not produce emotion-like responses. I'd argue no on creativity either. And only very limited reasoning, confined to the domains in its training set.
You have fundamental misunderstandings about neuroscience and cognitive science. It's hard to argue with you here because you simply don't know what you don't know.
Yes, the human brain is the machine we're describing. And we don't describe it very well. Definitely not at the level of understanding how to reproduce it with bitstrings.
I'm glad you're so passionate about this topic. But you're arguing the equivalent of FTL transit and living on Dyson Spheres. It's fun as a thought experiment and may theoretically be possible one day, but the line between what we're capable of today and that imagined future is neither straight nor visible, certainly not to the degree you're asserting here.
Will we one day have actual machine intelligence? Maybe. Is it going to come anytime soon, or look anything like the transformer-based LLM?
No.
ninetyninenine | 4 months ago
You say we cannot reproduce the brain. But that is not the point. The point is that nothing about the brain violates physics. It runs on chemical and electrical dynamics that obey the same laws as everything else. If those laws can produce intelligence once, then they can do so again in another substrate. That makes the claim of impossibility not scientific, but emotional.
You accuse me of misunderstanding neuroscience and cognitive science. The reality is that neither field understands itself. We have no complete model of consciousness. We cannot explain why synchronized neural oscillations yield awareness. We cannot define where attention comes from or what distinguishes a “thought” from a signal cascade. Cognitive science is still arguing over whether perception is bottom-up or top-down, whether emotion is distinct from cognition, and whether consciousness even plays a causal role. That is not mastery. That is the sound of a discipline still wandering in the dark.
You act as though neuroscience has defined the boundaries of intelligence, but it has not. We do not have a mechanistic understanding of creativity, emotion, or reasoning. We have patterns and correlations, not principles. Yet you talk as if those unknowns justify declaring machine intelligence impossible. It is the opposite. Our ignorance is precisely why it cannot be ruled out.
Emotion is not magic. It is neurochemical modulation over predictive circuits. Replicate the functional dynamics and you replicate emotion’s role. Creativity is recombination and constraint satisfaction. Replicate those processes and you replicate creativity. Reasoning is predictive modeling over structured representations. Replicate that, and you replicate reasoning. None of these depend on carbon. They depend on organization and feedback.
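The "recombination and constraint satisfaction" framing above can be sketched in a few lines. This is a toy illustration with made-up word lists, not a claim about how brains or LLMs actually implement creativity:

```python
import itertools

# Toy sketch of "recombination + constraint satisfaction":
# recombine known fragments, keep only combinations that
# satisfy a constraint. The word lists and the alliteration
# constraint are invented for illustration.
nouns = ["river", "circuit", "song"]
adjectives = ["silent", "electric", "broken"]

def novel_phrases(constraint):
    """All adjective-noun recombinations that pass the constraint."""
    return [f"{a} {n}"
            for a, n in itertools.product(adjectives, nouns)
            if constraint(a, n)]

# Constraint: alliteration (adjective and noun share a first letter).
print(novel_phrases(lambda a, n: a[0] == n[0]))  # → ['silent song']
```

The point is only that "generate recombinations, filter by constraints" is a mechanizable process; whether that exhausts what humans mean by creativity is exactly what the thread is arguing about.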
You keep saying that the brain cannot be “reproduced as bitstrings,” but that is a distraction. Nobody is suggesting uploading neurons into binary. The bitstring argument shows that any finite physical system has a finite description. It proves that cognition, like any process governed by law, has an information-theoretic footprint. Once you accept that, the difference between biology and computation becomes one of scale, not kind.
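The "finite description" claim is just counting. Any system with N distinguishable states can be labeled by a bitstring of about log2(N) bits; the numbers below are illustrative, not a model of the brain:

```python
import math

def bits_needed(num_states: int) -> int:
    """Minimum bits required to give every one of num_states
    distinguishable states a unique binary label."""
    return max(1, math.ceil(math.log2(num_states)))

# 8 states need 3 bits; 1000 states need 10 bits (2**10 = 1024).
print(bits_needed(8))     # → 3
print(bits_needed(1000))  # → 10
```

The argument in the comment is only that the description is finite, not that it is small or that we know how to write it down for a brain.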
You say LLMs are not creative, not emotional, not reasoning. Yet they already produce outputs that humans classify as empathetic, sarcastic, joyful, poetic, or analytical. People experience their words as creative because they combine old ideas into new, functional, and aesthetic patterns. They reason by chaining relationships, testing implications, and revising conclusions. The fact that you can recognize all of this in their behavior proves they are performing the surface functions of those capacities. Whether it feels like something to be them is irrelevant to the claim that they can reproduce the function.
And now your final claim, that whatever becomes intelligent “will not be an LLM.” You have no basis for that certainty. Nobody knows what an LLM truly is once scaled beyond our comprehension. We do not understand how emergent representations arise or how concepts self-organize within their latent spaces. We do not know if some internal dynamic of this architecture already mirrors the structure of cognition. What we do know is that it learns to compress the world into predictive patterns and that it develops abstractions that map cleanly to human meaning. That is already the seed of general intelligence.
You are mistaking ignorance for insight. You think not knowing how something works grants you authority to say what it cannot become. But the only thing history shows is that such confidence always looks ridiculous in retrospect. The physics of intelligence exist. The brain proves it. And the LLM is the first machine that begins to display those same emergent behaviors. Saying it “will not be an LLM” is not a scientific claim. It is wishful thinking spoken from the wrong side of the curve.
heyjamesknight | 4 months ago
Best of luck.