CrackerNews | 3 months ago

Neural nets and LLMs were created based on neuroscience research. Ultimately they are approximations of how parts of the human brain work.

The real concern with having no biomechanical skin in the game is the lack of sensory input that could ground it within our reality. All input into LLMs is based on the digital output of human labor, which is ultimately a set of symbolic representations filtered through our brains and their ideas of reality. However, this may not be too different from how real human brains work.

There has long been a philosophical dilemma over how real consciousness can be if it is imagined by our brains, since the brain provides a convincing hallucination of what seems like real sensory input or even free will. That is to say, at a philosophical level, humans live inside their brains, interpreting a fragment of reality based on how the brain processes sensory input.

Now, the LLM as a brain cuts out the entire step of agentic sensory input; it exists wholly as the product of our ideas.

Marshferm | 3 months ago

They have no functional or processual relationship to brains; there are scores of papers making this point. There are no valid parallels between AI and brains.

They were never approximations, merely false models.

The field is trapped in bad definitions and decisions.

https://pubmed.ncbi.nlm.nih.gov/37863713/

CrackerNews | 3 months ago

The keyword of that study is consciousness, which I'd consider a separate goal from "intelligence". LLM proponents are aware that their architecture lacks many parts of what constitutes a complete brain, and there are other AI researchers who disagree that LLMs will lead to either AGI or consciousness. I largely consider these tangential to the topic. A neural-net simulation of a virtual reality does not need consciousness; it only has to model the consequences of agentic actions.