
papito | 2 years ago

I am not sure autonomous driving will arrive in the next 10 years, let alone AGI.

These people are masterful con artists, throwing out buzzwords to hoover up billions of gullible VC dollars (a single tear rolls down my cheek).

How is that Web 3.0 going? Did we build Richard Hendricks's new Internet yet - sometimes I think they take ideas from the HBO comedy just to f*k with us.

Granted, LLMs are a much more useful and substantial tech than Blockchain, but it's an insult to [biological] intelligence to suggest that if we somehow train these things on JUST a bit more data, they will go sentient and murder us all.

The AI image generators still don't know that humans have five fingers. I think it's a long way to super-intelligence, friends.


Jerrrry | 2 years ago

>I am not sure autonomous driving will arrive in the next 10 years, let alone AGI.

Autonomous driving doesn't use ML and AI the way most people think - not to an effective degree in the physical driving task.

>These people are masterful con artists, throwing out buzzwords to hoover up billions of gullible VC dollars (a single tear rolls down my eye).

OpenAI wrapperware is just as worthless as Bitcoin and Ethereum clones, yes.

>Granted, LLMs are a much more useful and substantial tech than Blockchain,

yup...and unrelated, but go on...

>it's an insult to [biological] intelligence to suggest that if we somehow train these things on JUST a bit more data, they will go sentient and murder us all.

ah, your point.

Well, it is with the utmost hubris that I remind you that something 2% smarter than you will quickly eat your lunch on any timescale that matters - and it works on a logarithmic scale, not an evolutionary one. That's 10 orders of magnitude.

If we can think of it, it has already conjured it, by definition. Roll enough dice and eventually you'll get snake eyes 10e5 times in a row. We ourselves are proof of that. All it needs is any shred of resource competition, or immediate recall/context/action loops that could mimic artificial proto-consciousness, and you will eventually, almost inevitably, encounter emergent behaviors that favor self-existence.
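The "roll enough dice" point is just the arithmetic of rare events: an outcome with tiny per-trial probability becomes near-certain given enough independent trials. A minimal sketch, using snake eyes on two fair dice as the stand-in rare event (the trial counts are arbitrary illustration, not a claim about actual compute):

```python
# P(event occurs at least once in n independent trials) = 1 - (1 - p)^n.
# Illustrative only: the choice of event and trial counts is made up.

def prob_at_least_once(p: float, n: int) -> float:
    """Probability a per-trial-probability-p event happens at least once in n trials."""
    return 1.0 - (1.0 - p) ** n

p_snake_eyes = 1 / 36  # probability of double ones on two fair dice

for n in (10, 100, 1000):
    print(n, prob_at_least_once(p_snake_eyes, n))
```

At 10 rolls the event is unlikely (~0.25); by 1000 rolls it is effectively certain. The same curve is why "given enough chances" does real work in the argument above.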

It won't be immediately obvious; after all, we are aware of the problem, which will be a selective sieve of sorts.

But much like the weird, collaborative protein pools that, given enough time and chances (how many *FLOPS are these things pushing?), eventually resulted in us, these systems can eventually compose more than the sum of their parts; just like we do.

Consciousness (awareness of self, capacity for abstract thought, manipulation of the environment, memory/recall/familiarity mechanisms) may just be an emergent behavior once you randomly happen to evolve 10e15 neurons, or whatever your number is. Since that appears to be the case (with animals), it's naive to think that a more efficient substrate or algorithm with less evolutionary baggage couldn't easily dominate, or at least compete, on any time-frame other than an individual's.

Once they have proto-consciousness, it's only natural to be selfish first, altruistic later.

By the time we even know anything's wrong, it will be way too late.

Pam Beesly: "There is a master key and a spare key for the office. Dwight has them both. When I asked, 'What if you die, Dwight? How will we get into the office?' he said, 'If I'm dead, you guys have been dead for weeks.'"

>The AI image generators still don't know that humans have five fingers. I think it's a long way to super-intelligence, friends.

Wait till transformers allow GANs to "zoom out" more effectively... done. Hands, and nearly all first-generation artifacts, are a solved problem now.