item 38387214

Sam Altman's Second Coming Sparks New Fears of the AI Apocalypse

31 points | zerohalo | 2 years ago | wired.com

23 comments


zerohalo|2 years ago

Despite the hyperbolic title, the article outlines pretty clearly the outcomes of the OpenAI debacle and where things are headed. "Mission-oriented" AI research is dead; it's just another tech race by the big boys now.

baking|2 years ago

I would have appreciated more on the challenges facing the new OpenAI board. The quote at the end from Satya was a bit of a shocker.

proc0|2 years ago

Right now, at worst, AI will have a similar impact as social media. It's definitely worth thinking about the potential harm it could cause to different industries, but thinking it's an existential risk is still premature. We don't have any cognitive architecture that will reach self-awareness and become an unstoppable bad actor. This entire ordeal more or less proved that people are a little too paranoid.

ilaksh|2 years ago

We don't have a dangerous AI yet, but because we are talking about an existential risk, yes we do want to start thinking about it now.

AI does not have to be alive or truly conscious or anything to be dangerous. It just needs to be very effective at problem solving and have someone take the guardrails off and tell it to act in a self-interested way. Especially without humans in the loop.

It seems likely that these systems will continue to get much faster and more robust at problem solving. It is easy to anticipate that within just a few years, agent swarms could be connected to pretty much everything (as directed by their human owners), with no human able to compete. That will be a precarious position.

reducesuffering|2 years ago

Nearly all of the top domain experts on these systems, Yann LeCun being the notable exception, believe it's an existential risk. Nothing has "proved that people are a little too paranoid." Why people listen to HN prognosticators over Hinton, Bengio, Sutskever, Altman, Amodei, Hassabis, etc. is beyond me.

TerrifiedMouse|2 years ago

> but thinking it's an existential risk is still premature

Even ignoring what the technology can do, it fills me with dread that we basically have, IMHO, some of the worst possible people leading the charge: people who are greedy, slimy, and ruthless; who only care about money. Such a bad timeline we are in.

barrysteve|2 years ago

LLMs in theory can absorb a stream of integers into a pattern of weighted nodes.

Every program ever written is vulnerable to being generalized into an LLM pattern.
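The "stream of integers" framing can be sketched concretely. As a toy stand-in for a real learned tokenizer, raw UTF-8 bytes serve as the integer "tokens" here; the source string and variable names are illustrative only:

```python
# Toy sketch (not a real tokenizer): any program's source code reduces
# to a stream of integers, which is the only form an LLM ever sees.
# Real models use learned subword vocabularies (e.g. byte-pair encoding);
# raw UTF-8 bytes are a simple stand-in.
source = "def add(a, b):\n    return a + b\n"
token_ids = list(source.encode("utf-8"))  # one integer per byte
print(token_ids[:5])  # IDs for 'd', 'e', 'f', ' ', 'a'
```

In this byte-level view every token ID falls in 0..255; a production tokenizer instead maps multi-character chunks to IDs drawn from a vocabulary of tens of thousands, but the principle is the same: program text in, integer stream out.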

The outputs of AI and its predictive abilities are not scary.

The input capacity of AI will eventually eat every programmer's lunch and their programs too.

The existential risk is mapping out computer programming as an entire field and locking it away.

ithkuil|2 years ago

I'm not an AI doomer myself but I want to do my best to understand (and steelman) the argument of those who are concerned about existential risks.

If you really think the job of programmers is at risk because the current wave of AI can write any program, that also means AI can write a new generation of AI systems potentially much more powerful than the previous one, and presumably this iteration can repeat N times until the end result is sufficiently incomprehensible to humans.

Eventually such systems will work very well and be very useful, and we'll rely on them more and more.

At that point the people worried about existential risk will point out that when a vastly superior intelligence is tasked to love and protect us, a lower intelligence, it may do things for "our own good" that we may find appalling.

I'm just about to neuter my cat today.

gumballindie|2 years ago

Explain why y'all are obsessed with spreading fear, and why y'all are culting a dubious dude such as Altman?