Despite the hyperbolic title, the article outlines pretty clearly the outcomes of the OpenAI debacle and where things are headed. "Mission-oriented" AI research is dead; it's just another tech race by the big boys now.
Right now, at worst, AI will have an impact similar to social media's. It's definitely worth thinking about the potential harm it could cause across different industries, but thinking of it as an existential risk is still premature. We don't have any cognitive architecture that will reach self-awareness and become an unstoppable bad actor. This entire ordeal more or less proved that people are a little too paranoid.
We don't have a dangerous AI yet, but because we are talking about an existential risk, yes, we do want to start thinking about it now.
AI does not have to be alive or truly conscious or anything to be dangerous. It just needs to be very effective at problem solving and have someone take the guardrails off and tell it to act in a self-interested way. Especially without humans in the loop.
It seems likely that the systems will continue to get much faster and more robust in terms of problem solving. It is easy to anticipate the possibility within just a few years of agent swarms being connected to pretty much everything (as directed by their human owners), and no human being able to compete. This will be a precarious position.
All the top domain experts on these systems, something like 95% of them, Yann LeCun aside, believe it's an existential risk. Nothing has "proved that people are a little too paranoid." Why people listen to HN prognosticators over Hinton, Bengio, Sutskever, Altman, Amodei, Hassabis, etc. is beyond me.
> but thinking it's an existential risk is still premature
Even ignoring what the technology can do, it fills me with dread that we basically have, IMHO, some of the worst possible people leading the charge: people who are greedy, slimy, and ruthless; who only care about money. Such a bad timeline we are in.
I'm not an AI doomer myself but I want to do my best to understand (and steelman) the argument of those who are concerned about existential risks.
If you really think that programmers' jobs are at risk because the current wave of AI can write any program, that also means AI can now write a new generation of AI systems potentially much more powerful than the previous one. Presumably this iteration can repeat N times, until the end result is sufficiently incomprehensible to humans.
Eventually such systems will work very well and be very useful and we'll rely on it more and more.
At that point the people worried about existential risk will point out that when a vastly superior intelligence is tasked to love and protect us, a lower intelligence, it may do things for 'our own good' that we may find appalling.
zerohalo|2 years ago
baking|2 years ago
proc0|2 years ago
ilaksh|2 years ago
reducesuffering|2 years ago
TerrifiedMouse|2 years ago
barrysteve|2 years ago
Every program written is vulnerable to being generalized into an LLM pattern.
The outputs of AI and its predictive abilities are not scary.
The input capacity of AI will eventually eat every programmer's lunch and their programs too.
The existential risk is mapping out computer programming as an entire field and locking it away.
ithkuil|2 years ago
I'm just about to neuter my cat today
gumballindie|2 years ago