(no title)
snuxoll|29 days ago
We can go ahead and have arguments and discussions on the nature of consciousness all day long, but the design of these transformer models does not lend itself to being 'intelligent' or self-aware. You give them context, they fill in their response, and their execution ceases - there's a very large gap in complexity between these models and actual intelligence or 'life' in any sense, and it's not in the raw amount of compute.
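Concretely, the whole inference loop is something like the sketch below (purely illustrative pseudo-Python; `model` and `sample` are hypothetical stand-ins, not any real API):

    # Minimal sketch of stateless autoregressive decoding. Illustrative
    # only: `model` and `sample` are hypothetical stand-ins.
    def generate(model, context_tokens, max_new=256, eos=0):
        tokens = list(context_tokens)      # the model's entire "state"
        for _ in range(max_new):
            logits = model(tokens)         # one forward pass over the context
            next_token = sample(logits)    # pick the next token from the logits
            tokens.append(next_token)
            if next_token == eos:          # response finished...
                break
        return tokens                      # ...and execution simply ceases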
If none of the training data for these models contained works of philosophers; pop culture references around works like Terminator, 'I, Robot', etc.; texts from human psychologists; and so on, you would not see these existential posts on moltbook. Even 'thinking' models do not have the ability to truly reason; we're just encouraging them to spend tokens pretending to think critically about a problem, padding the recent context to improve prediction accuracy.
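Mechanically, 'thinking' is just more of the same loop. A hedged sketch, reusing the hypothetical `generate` above (the prompt strings and the `tokenize` helper are my own inventions, not how any particular model does it):

    # "Thinking" as plain context augmentation: reasoning tokens are
    # generated first, then fed back in as extra context for the answer.
    def answer_with_thinking(model, question):
        scratch = generate(model, tokenize("Think step by step: " + question))
        # The "reasoning" is nothing but additional tokens in the window,
        # nudging the final prediction toward a better continuation.
        return generate(model, scratch + tokenize("Final answer: "))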
I'll be quaking in my boots about a potential singularity when these models have an architecture that's not a glorified next-word predictor. Until then, everybody needs to chill the hell out.
shmeeed|28 days ago
I'm with you. Sadly, Scott seems to have become a true AI Believer, and I'm getting increasingly disappointed by the kinds of reasoning he comes up with.
Although, now that I think of it, I guess the turning point for me wasn't even the AI stuff, but his (IMO) abysmally lopsided treatment of the Fatima Sun Miracle.
I used to be kinda impressed by the Rationalists. Not so much anymore.
tasuki|29 days ago
Do you have the ability to truly reason? What does that mean, exactly? How does what you're doing differ from what the LLMs are doing? All your output here is just word after word after word...
netsharc|29 days ago
> We can go ahead and have arguments and discussions on the nature of consciousness all day long
I think s/he needs to change the "We" to "You".
snuxoll|28 days ago
At the end of the day, the underlying architecture of LLMs has no capacity for abstract reasoning; they have no goals or intentions of their own, and, most importantly, their ability to generate something truly new or novel that isn't directly derived from their training data is limited at best. They're glorified next-word predictors, nothing more. This is why I said anthropomorphizing them is something only fools would do.
Nobody is going to sit here and try to argue that an earthworm is sapient, at least not without being a deliberate troll. I'd argue, and many would agree, that LLMs lack even an earthworm's level of sentience.
samusiam|28 days ago
If you ask me, anyone who presumes to know where the current architecture of LLMs will hit a wall is a fool.