fortyseven | 7 days ago

Hell, I don't even know for sure if I'm "conscious". When I really stop and think hard about it, the process of speaking or typing, word by word (even this!), is built on past experience. If you smack me on the head hard enough and give me amnesia, there goes all that memory, and suddenly I can't talk about the things I could before. I would struggle and need to be exposed to new information (looking at it, reading up, being told about it, etc.) to be able to discuss them again. For me, that suggests a process that's not entirely different from a large language model. Not the same, but it definitely makes me wonder whether I have more in common with them on some level, and whether there's as much to the human mind as we think. Then again, maybe for humans we really are more than the sum of our components.

adamzwasserman | 7 days ago

The commonality breaks down at value assignment. You hear an unexpected sound and have a threat/delight assessment in 170ms. Faster than Google serves a first byte. You do this with virtually no data.

An LLM doesn't assign value to anything; it predicts tokens. The interesting question isn't whether we share a process with LLMs; it's whether the things that make your decisions matter to you (moral weight, spontaneous motivation) can emerge from a system that has no survival stake in its own outputs. I wrote about this a few years ago as "the consciousness gap": https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8
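
To make "it predicts tokens" concrete, here's a toy sketch in Python: a bigram count table sampled one word at a time. Purely illustrative (real LLMs use transformers over subword tokens, not count tables), but the loop has the same shape: score every candidate next token, sample one, repeat.

    import random
    from collections import Counter, defaultdict

    # Count bigrams in a tiny corpus to approximate P(next | previous).
    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    # Generate by repeatedly sampling from the conditional distribution.
    word = "the"
    output = [word]
    for _ in range(8):
        counts = bigrams[word]
        if not counts:  # token never seen with a successor; stop
            break
        tokens, weights = zip(*counts.items())
        word = random.choices(tokens, weights=weights)[0]
        output.append(word)

    # Note: nothing in this loop evaluates whether an outcome is good
    # or bad for the system; it only follows the learned distribution.
    print(" ".join(output))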