top | item 46844612


snuxoll | 29 days ago

Anyone who anthropomorphizes LLMs except for convenience (because I get tired of repeating 'Junie' or 'Claude' in a conversation, I will use female and male pronouns for them, respectively) is a fool. Anyone who thinks AGI is going to emerge from them in their current state is equally foolish.

We can go ahead and have arguments and discussions on the nature of consciousness all day long, but the design of these transformer models does not lend itself to being 'intelligent' or self-aware. You give them context, they fill in their response, and their execution ceases - there's a very large gap in complexity between these models and actual intelligence or 'life' in any sense, and it's not a gap in the raw amount of compute.
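To make the "context in, response out, execution ceases" point concrete, here is a minimal toy sketch of an autoregressive decoding loop. The `toy_model` function is entirely made up for illustration - it just stands in for a transformer forward pass that maps a context to a next-token distribution - and nothing here resembles a real LLM's internals:

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "<eos>"]

def toy_model(context):
    # Stand-in for a transformer forward pass: given the context so far,
    # return a probability distribution over the next token. (Arbitrary
    # toy weights; a real model computes these from learned parameters.)
    weights = [len(context) % (i + 2) + 1 for i in range(len(VOCAB))]
    total = sum(weights)
    return [w / total for w in weights]

def generate(prompt, max_tokens=10, seed=0):
    rng = random.Random(seed)
    context = list(prompt)
    for _ in range(max_tokens):
        probs = toy_model(context)
        token = rng.choices(VOCAB, weights=probs)[0]
        if token == "<eos>":
            break
        # The sampled token is appended and fed back in as context.
        context.append(token)
    # Execution ends here; no state survives to the next call.
    return context

print(generate(["the"]))
```

The whole process is one stateless function called in a loop: sample a token, append it to the context, repeat until a stop token. That is the sense in which the parent calls these models next-word predictors.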

If none of the training data for these models contained works of philosophers; pop-culture references to works like Terminator, 'I, Robot', etc.; texts from human psychologists; and so on, you would not see these existential posts on moltbook. Even 'thinking' models do not have the ability to truly reason; we're just encouraging them to spend tokens pretending to think critically about a problem, adding data to the recent context to improve prediction accuracy.

I'll be quaking in my boots about a potential singularity when these models have an architecture that's not a glorified next-word predictor. Until then, everybody needs to chill the hell out.


shmeeed | 28 days ago

>Anyone who anthropomorphizes LLM's except for convenience [...] is a fool.

I'm with you. Sadly, Scott seems to have become a true AI Believer, and I'm getting increasingly disappointed by the kinds of reasoning he comes up with.

Although, now that I think of it, I guess the turning point for me wasn't even the AI stuff, but his (IMO) abysmally lopsided treatment of the Fatima Sun Miracle.

I used to be kinda impressed by the Rationalists. Not so much anymore.

tasuki | 29 days ago

> Even 'thinking' models do not have the ability to truly reason

Do you have the ability to truly reason? What does it mean exactly? How does what you're doing differ from what the LLMs are doing? All your output here is just a word after word after word...

netsharc | 29 days ago

As grandparent wrote:

> We can go ahead and have arguments and discussions on the nature of consciousness all day long

I think s/he needs to change the "We" to "You".

snuxoll | 28 days ago

The problem of other minds is real, which is why I specifically separated philosophical debate from the technological one. Even if we met each other in person, for all I know, I could in fact be the only intelligent being in the universe and everyone else is effectively a bunch of NPCs.

At the end of the day, the underlying architecture of LLMs has no capacity for abstract reasoning, they have no goals or intentions of their own, and, most importantly, their ability to generate something truly new or novel that isn't directly derived from their training data is limited at best. They're glorified next-word predictors, nothing more. This is why I said anthropomorphizing them is something only fools would do.

Nobody is going to sit here and try to argue that an earthworm is sapient, at least not without being a deliberate troll. I'd argue, and many would agree, that LLMs lack even that level of sentience.

yread | 29 days ago

When I ask an LLM to plan a trip to Italy and it finishes with "oh, and btw, I figured out the problem you had last week with the thin plate splines - you have to do this ...."

samusiam | 28 days ago

> Anyone who thinks AGI is going to emerge from them in their current state, equally so.

If you ask me, anyone who presumes to know where the current architecture of LLMs will hit a wall is a fool.