top | item 34757233

wvoch235 | 3 years ago

Honestly, yes, I don’t think it’s that far off. LLMs are a series of relatively simple transformers chained together, through which we can simulate thought to the point that it not only passes the Turing test but is actually useful.

This is a bit of an extrapolation, but I would say the reason we’ve been unable to locate “consciousness” in the brain is that it’s the same thing: relatively simple neurons, chained together, creating thought.

On a philosophical level: this doesn’t make any claims for either idealism or materialism; “experience” could exist at a more fundamental level of reality than matter. But IMO that would mean the LLM is “experiencing” as well.


Retric | 3 years ago

The problem with that assessment is it doesn’t actually account for novel inputs.

Consider how you would respond to my question posed here vs a language model. https://news.ycombinator.com/item?id=34757366

Of course developers can always tack on edge cases, but ChatGPT can’t, for example, handle beating a novel MUD from the ’80s. This isn’t about diminishing its accomplishments, just pointing out why its creators aren’t hailing it as AGI.

Just look at its wonderful attempt to play chess: https://youtu.be/rSCNW1OCk_M