nytesky|4 months ago
What is it about large language models that makes otherwise intelligent and curious people assign them these magical properties? There's no evidence, at all, that we're on the path to AGI. Whether non-biological consciousness is even possible is an open question. Yet we've seen these statistical language models spit out convincing text, and people fall over themselves to conclude that we're on the path to sentience.
curiouscube|4 months ago
There are two conclusions you can draw: Either the machines are conscious, or they aren't.
If they aren't, you need a really good argument showing how they differ from humans, or you can take the opposite route and question the consciousness of most humans.
Since I haven't heard any really convincing argument beyond "their consciousness takes a form that is different from ours, so it isn't consciousness", and I do think other humans are conscious, I currently hold the opinion that they are conscious.
(Consciousness does not actually mean you have to fully respect them as autonomous beings with a right to live, since even wanting to exist is something different from consciousness itself. I think something can be conscious and have no interest in its continued existence, and that's okay.)
lowsong|4 months ago
No. Their output can mimic language patterns; mimicry is not consciousness.
> If they aren't, you need a really good argument that shows how they differ from humans or you can take the opposite route and question the consciousness of most humans.
The burden of proof is firmly on the side of proving they are conscious.
> I currently hold the opinion that they are conscious.
There is no question, at all, that the current models are not conscious; the question is “could this path of development lead to one that is?”. If you are genuinely ascribing consciousness to them, then you are seeing faces in clouds.
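The "statistical language model" framing being debated above can be made concrete with a toy example. This is a hedged sketch, not how real LLMs work: actual models are large transformer networks, but the core mechanic is the same, predicting the next token from preceding context. The corpus and function names here are invented for illustration.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Sample a sequence by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny invented corpus; a real model trains on trillions of tokens.
corpus = "the cat sat on the mat the cat saw the dog"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is locally fluent yet carries no understanding, which is the crux of the disagreement in this thread: whether scaling this idea up changes anything in kind, or only in degree.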
estimator7292|4 months ago
The fact that LLMs are really not fit for AGI is a technical detail divorced from people's feelings about LLMs. You have to be a fairly technical person to understand AI well enough to know that. LLMs-as-AGI is what people are being sold. There's mass economic hysteria around LLMs, and rationality left the equation a long time ago.
anonzzzies|4 months ago
I mean, we went from worthless chatbots that basically pattern-matched, to me waiting for a plane and seeing a fairly large number of people chatting with ChatGPT, not Instagram, WhatsApp, etc. Or sitting on a plane next to a person using local Ollama in Cursor to code and brainstorm. It took us about 10 years to go from ideas that no one but scientists could use to stuff everyone uses. And many people already find it human enough. What about in 100 years?