top | item 45766939

lowsong | 4 months ago

What is it about large language models that makes otherwise intelligent and curious people assign them these magical properties? There's no evidence, at all, that we're on the path to AGI. The very idea that non-biological consciousness is even possible is an unknown. Yet we've seen these statistical language models spit out convincing text, and people fall over themselves to conclude that we're on the path to sentience.


nytesky|4 months ago

First off, we don't understand our own consciousness. Second, as the old saying goes, sufficiently advanced technology is indistinguishable from magic. If it is completely convincing as AGI, even if we are skeptical of its methods, how can we know it isn't?

curiouscube|4 months ago

I think we can all agree that LLMs can mimic consciousness to the point that it is hard for most people to discern them from humans. The Turing test isn't even really discussed anymore.

There are two conclusions you can draw: Either the machines are conscious, or they aren't.

If they aren't, you need a really good argument that shows how they differ from humans or you can take the opposite route and question the consciousness of most humans.

Since I haven't heard any really convincing argument besides "their consciousness takes a form that is different from ours, so it's not conscious", and I do think other humans are conscious, I currently hold the opinion that they are conscious.

(Consciousness does not actually mean you have to fully respect them as autonomous beings with a right to live, since even wanting to exist is something different from consciousness itself. I think something can be conscious and have no interest in its continued existence, and that's okay.)

lowsong|4 months ago

> I think we can all agree that LLMs can mimic consciousness to the point that it is hard for most people to discern them from humans.

No, their output can mimic language patterns.

> If they aren't, you need a really good argument that shows how they differ from humans or you can take the opposite route and question the consciousness of most humans.

The burden of proof is firmly on the side of proving they are conscious.

> I currently hold the opinion that they are conscious.

There is no question, at all, that the current models are not conscious. The question is "could this path of development lead to one that is?" If you are genuinely ascribing consciousness to them, then you are seeing faces in clouds.

estimator7292|4 months ago

I think it's like seeing shapes in clouds. Some people just fundamentally can't decouple how a thing looks from what it is. It's not that they literally believe ChatGPT is a real sentient being, but deep down there's a subconscious bias. Babbling nonsense included, LLMs look intelligent, or very nearly so. The abrupt appearance of very sophisticated generative models in the public consciousness, and the velocity with which they've improved, is genuinely difficult to understand. It's incredibly easy to form the fallacious conclusion that these models can keep improving without bound.

The fact that LLMs are really not fit for AGI is a technical detail divorced from the feelings about LLMs. You have to be a pretty technical person to understand AI well enough to know that. LLMs as AGI is what people are being sold. There's mass economic hysteria about LLMs, and rationality left the equation a long time ago.

anonzzzies|4 months ago

What we do have, for whatever reason (usually money related: either making money or getting more funding), is many companies/people focused on making AI. It might take another winter (I believe it will, unless we find a way to retrain the NNs on the fly instead of storing new knowledge in RAG — and many other things we currently don't have, but this would be a step) or not, but people will keep pushing toward that goal.

I mean, we went from worthless chatbots that basically pattern matched, to me waiting for a plane and seeing a fairly large number of people chatting with ChatGPT, not Insta, WhatsApp, etc. Or sitting on a plane next to a person who is using local Ollama in Cursor to code and brainstorm. It took us about 10 years to go from ideas that no one but scientists could use to stuff everyone uses. And many people already find it human enough. What about in 100 years?