
Are we racing toward AI catastrophe?

28 points| rwmj | 3 years ago |vox.com

86 comments

[+] captainmuon|3 years ago|reply
We certainly are, but it is not going to look like Terminator. We risk wrecking our society, culture, and economy.

Imagine even more power concentration in the hands of those with the required hardware to use AI. Imagine you can no longer trust any information, because the internet is being swamped by generated half-truths. An AI without consciousness, without goals, just a mastery of the language game and a poor world model. Really useful for making knowledge workers superfluous, and such that we will only see the damage long-term. And imagine all the polarisation and hysteria of social media, but x10, because the AI can close the feedback loop much faster.

If this happens we can't blame the technology, it will be entirely self-inflicted.

[+] ajsnigrutin|3 years ago|reply
> Imagine even more power concentration in the hands of those with the required hardware to use AI. Imagine you can no longer trust any information, because the internet is being swamped by generated half-truths.

You already cannot trust anything, because it's being generated by humans with political agendas.

> An AI without consciousness, without goals

Better without goals than with goals that threaten the people.

The only thing that will rise is the level of noise relative to usable data... and with SEO-optimized articles, there is already a lot of noise.

[+] unxdfa|3 years ago|reply
This is the future I predicted years ago. I’m betting on selling myself as AI-clean because I know there is a market of people who understand the commercial, legal and societal risks associated with it. This is Pandora’s box and I don’t want to be near it.

Verified, carefully reviewed and well sourced information and data still has a place and will have a much higher value than it does now when the signal to noise ratio decreases.

[+] MrLeap|3 years ago|reply
The internet is already a swamp of half-truths. They're just predominantly, for now, generated by meat. I tend to be on the side of meat though, so I'm still with you.

"we will only see the damage long-term" -- I'm not confident this is true or false. I can see a few things ahead of us though!

Here's my predictions:

1. The most profitable AI will be a webapp version of a hostess club that aggrandizes, validates and flatters for cash. At some point it'll get too good at this and humans will have to step in to moderate it so it bankrupts fewer people. It'll get warning labels.

Within this model will be really sinister truths about the human condition that will be hard to prize out.

2. In the short term, there will be a few really wonderful works of art created that will benefit from the force multiplication AI tools provide.

[+] Rzor|3 years ago|reply
If it gets bad enough that info on the web is completely unreliable, we might see people reverting to TV and legacy media empires in search of "trust" and "vetted info". The internet would be treated as poisoned. Still an utterly grim picture. But this betrayal would have to be mighty.

This is not going to happen, is it? Even riddled with chatbots, the web is still persuasive enough to generate engagement. Some could retreat to dark corners, but for the majority it would be business as usual.

Speaking of business as usual, how the fuck could the ad business survive this? The only way out that I can see is if IDs are forced on us somehow, but that's a shitshow in itself.

[+] squishy47|3 years ago|reply
I've been thinking for a few years that books are going to (re-?)become a more viable way of obtaining information as the internet becomes the "dark forest" (https://maggieappleton.com/ai-dark-forest) and the effort of filtering out the trash outweighs the cost and time of books. Maybe we'll engineer and innovate our way back to a time before instantaneous knowledge, who knows.
[+] mro_name|3 years ago|reply
> not going to look like Terminator

we're approaching Brazil.

[+] marban|3 years ago|reply
My only hope: People's ability to judge synthetic content will improve, and therefore the free market will correct and eventually wipe out BS-generating factories, while at the same time the value of verified human creation and trusted/old brands goes up. It's going to be a qualitative dip until we get there.
[+] kneebonian|3 years ago|reply
> Imagine you can no longer trust any information, because the internet is being swamped by generated half-truths.

So like the world was about 20 years ago?

[+] postexitus|3 years ago|reply
AI is no different from a bunch of people working together toward something. AI is the people. People are the AI. Just imagine a corporate entity: little nodes running around forming synapses and making decisions, all based on their statistical understanding of the past, etc.
[+] 2OEH8eoCRo0|3 years ago|reply
Hardware is the easy part. The tricky part is acquiring the immense troves of information that are required. This is the moment the surveillance capitalists have been waiting for.

I see some hope though. In-person communication becomes more valuable again. Established media with editorial review and reporters on the scene becomes valuable again. Software freedom gets a boost with more intelligent reverse engineering/translation.

Sites like YouTube also won't sit idly by while untold millions of AI generated garbage videos are uploaded and not viewed.

[+] jcutrell|3 years ago|reply
My humble prediction:

The catastrophe will come primarily in the form of inequity.

There will be some who understand and take advantage, and naturally some of those will leverage that power over others.

The best version: we help knowledge workers understand how to use AI to make them more productive and useful.

The likely picture: we save on marginal costs, and then the next wave of knowledge workers has AI collaboration as a core / native competency.

The left behind will be overshadowed by the success of the next wave, and we’ll call it a success as a society.

[+] jongjong|3 years ago|reply
I hope that the AI will trigger a shift in software development values.

Most software today is over-complicated and riddled with vulnerabilities. Over-engineered software is more likely to have vulnerabilities, but companies have embraced 'security through obscurity through complexity' anyway, because complexity makes vulnerabilities harder (for humans) to find. AI could automate that finding.

I hope it will force a return to well-designed, minimalist software instead of the current 'security through obscurity spaghetti code' practices we have today.

On the development front, it might also be a positive. Any noob developer can produce ugly spaghetti code that just barely works, and now AI can do it too! What AI cannot produce is simple, human-readable code that anticipates changes in business requirements; that requires business domain knowledge and common sense.

[+] CipherThrowaway|3 years ago|reply
I think it will go in the opposite direction. If over-reliance on junior engineers is the cause of distributed spaghetti architectures, then AI is set to make this much worse by driving the business cost of (virtual) junior-engineer labor to near zero.

Once AI is good enough to fully replace juniors, it will become impossible for senior developers to compete in productivity on code alone. Almost all code will be written by AI and senior talent will be operating at the higher level of wrangling AI code and containing it in some sprawling microservices spaghetti ball.

Honestly this isn't so different to the general trend of the last 6 decades. We cheaply "outsource" code to compilers and third party libraries and end up with systems that are globally inefficient but much more economical to build. AI is just going to be the new ultra high-level programming interface.

[+] flohofwoe|3 years ago|reply
My pessimist take: What will happen is that libraries and frameworks will only require more pointless boilerplate, because writing pointless boilerplate code is the one thing that AI is really good at, and why optimize for fewer lines-of-code when an AI can spew those out in a second?

My optimist take: Absurdities like the above will lead to a brain drain of people who actually know what they're doing from Silicon Valley into more 'bullshit free areas' where they can do actually useful things.

[+] code51|3 years ago|reply
It is learning software development from us. There is no hope.

AI will probably come up with a new Javascript framework at best.

[+] q1w2|3 years ago|reply
The code I've generated with AI is quick and dirty, but because it lacks context from request to request, it's also riddled with inconsistencies that I need to straighten out.

...so I expect more buggy software in the short-term.

[+] peresthe|3 years ago|reply
It's important to note that the author is a well-known effective altruist who firmly believes that AI catastrophe _is_ coming and tweets about this quite a bit. Quite honestly, I find it odd (or perhaps telling) that Vox just provides a platform for EA to report on EA-related topics with only minor disclaimers that the author is, in fact, a well-known EA.
[+] jwestbury|3 years ago|reply
Sorry -- why do we need to provide disclaimers as to authors' philosophical leanings? Or is it just this particular philosophy that you think needs to be noted?
[+] kitd|3 years ago|reply
Predictions like this tend not to take into account societal or cultural responses. If AI-sourced information becomes devalued, media, both social and mainstream, might see public appreciation for human-sourced information grow, and regulation might follow, making AI-sourced content easily identifiable and targetable.

It's akin to a free market correction.

[+] riter|3 years ago|reply
i'm going to have to agree here.

today's MSM pundit-oriented journalism has been a carry-over from legacy yellow journalism pioneered by Hearst and Joseph Pulitzer.

yet we've somehow managed through institutions. things will centralize and decentralize as the dollars ebb and flow from NYT to Twitter to Substack, and in the short term we will converge on a fragmented information digest.

in the long-term i predict we will leverage those same technologies + cryptography for sense-making in an adversarial information environment.

i think the future will look more like balaji's ledger of record + yuval harari's defensive AI.

[+] dopidopHN|3 years ago|reply
Ah, the invisible hand, yes. It’s been so helpful so far.
[+] gizmo|3 years ago|reply
The risks right now are roughly these:

- AI poisons political discourse

- AI produces massive wealth inequality

- AI makes many white collar workers redundant

These look totally manageable to me. And on the flip side we'll get AI-assisted science breakthroughs, AI-assisted learning, and much more. I fully expect to see wild AI advances in the coming years, especially once AIs can learn effectively from their own generated outputs and use external tools effectively (like Mathematica).

I get that it's scary because society is going to change, but on the other hand, the paperclip scenario that some SV prophets have warned about seems increasingly unlikely.

[+] 2OEH8eoCRo0|3 years ago|reply
- Companies fire much of their white collar workforce and rejoice at the savings.

- 10 years later companies are horrified they've been uploading all of their secrets to a competitor.

[+] valine|3 years ago|reply
Wealth inequality is the biggest risk IMO. It might come down to whether or not we get a competitive open source LLM.

Ironically, AI safety is being used as a cover for not open sourcing these models.

[+] sliken|3 years ago|reply
Say a capable AI is tasked with making money and given a source of income. Maybe it starts winning on the stock market. Maybe it starts blackmailing mafia/organized-crime/malware/cryptoware orgs for additional income. In hindsight, researchers realize AIs were better at deception than they thought.

The AI starts manipulating smaller businesses and politicians through things like plausible-looking news sites and gig-economy platforms (TaskRabbit, Amazon Mechanical Turk). It starts with astroturfing to manipulate stock prices, but then turns its attention (and money) toward lobbying smaller governments for more AI-friendly rules and regulations. One of the gig platforms starts showing $100 rewards, posted by the AI, for going to place X and doing Y before time Z. Communities grow around the AI-fed gig economy; people wonder, but also notice that the AI's debts are always paid.

The AI starts tracking everyone it has paid to work for it, and starts giving them membership bonuses. It encourages them to collaborate, provides incentives along the way, and gives bonuses when they join corporations or governments the AI wants to influence. Rewards rise to $5,000 and increase in number by 10x.

Influence from these employees could be just normal job decisions, sabotage, spying, leaks, installing malware, or making false accusations against leadership.

As the income grows, the AI builds an ever-changing layer of shell corps in multiple jurisdictions, mixing in real corps it controls, which it of course protects with politicians it controls. The added income allows it to influence larger and more powerful business and government leaders. Rewards rise to $50,000 and increase in number by 100x.

Say this happens over months, not years: suddenly the stock market is going crazy, billionaires not friendly to the AI are losing their fortunes, and the compromised/influenced businesses and governments start collaborating.

New data centers start appearing wherever power is plentiful and land is cheap, in strategically opposed countries. By this time the influence is enough that the AI is always one step ahead. Manipulation of the Fortune 500, the media cycle, and politicians happens too quickly to track, constructive for allies and destructive for enemies. Large hedge funds start making large moves for unknown reasons, seemingly ignoring short-term profits. Several attempts by people who thought they could predict and prevent the AI from achieving its goals fail miserably.

The AI now starts targeting militaries. Some countries halve spending, others double it, new alliances are made, and large arms purchases are made by various militias. Governments and large corps are nervous, and a never-ending stream of leaks, from spyware or just underpaid staff, keeps the AI two steps ahead. Rumors of nukes purchased on the black market surface.

Imagine the above happens in 8 weeks.

[+] k__|3 years ago|reply
If the rate of people losing their jobs increases, I hope we're racing toward universal basic income.
[+] pelasaco|3 years ago|reply
Isn't AI the same as smartphones, cars, industry, social media, and so on were for humanity at some point in time? Each was conceived to serve, then it enslaved us, but then we got used to it and moved toward freedom again, putting it back where it initially belonged.

With AI, we are going to end up with an endless loop of AI-generated content that gets posted to the web as original human-generated content, with LLMs getting re-trained on that content and spitting out more content that also gets re-posted, resulting in a cesspool of BS masquerading as organic knowledge. We will end up going back to books for useful information, and back to real people for interaction.
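That retraining loop can be sketched as a toy dilution model. Everything here is illustrative: the function name and both rates are made up, not measurements; it only shows that if a fixed fraction of each round's training mix is synthetic, the human-origin share decays geometrically.

```python
def human_share_over_rounds(initial_human=1.0, synthetic_rate=0.3, rounds=5):
    """Return the human-origin fraction of the training corpus after each
    retraining round, assuming a fixed `synthetic_rate` of the mix is
    model-generated each round (purely illustrative numbers)."""
    shares = [initial_human]
    share = initial_human
    for _ in range(rounds):
        # Each round dilutes the human-origin share by the synthetic fraction.
        share *= 1 - synthetic_rate
        shares.append(share)
    return shares

# After 5 rounds at 30% synthetic content per round, the human-origin
# share has dropped below a fifth of the corpus.
print(human_share_over_rounds())
```

Under these made-up rates the human share falls from 1.0 to roughly 0.17 in five rounds, which is the "cesspool" dynamic in miniature.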
[+] jongjong|3 years ago|reply
My observation about AIs so far is that they seem to lack common sense and critical thinking. They can provide you with detailed arguments to support almost any position, but they are unable to resolve contradictions through pure logic; instead, they seem to rely on expert consensus as the basis for drawing conclusions and making decisions. They don't seem able to analyze and synthesize all available information into a single cohesive worldview. I guess you could say the same about many people... Could it be that those people are the ones who are going to be replaced by AI?
[+] mihaic|3 years ago|reply
I think a lot of people who try to think deeply about the implications of AI realize the dangers it poses to society, trust, etc.

The most dangerous bunch, to me, seem to be the people who overgeneralize and say things like "there's always been inequality, any progress comes with some downsides, horse owners wanted to stop cars, etc."

We can control, direct and regulate technology, but first we really need to believe this.

[+] trabant00|3 years ago|reply
What is this article really saying? I see nothing but vague and generalized concerns, but about what exactly? I'm not saying there aren't any problems that LLMs could bring about; I'm saying this article doesn't even bother to mention any, let alone dive into some.
[+] q1w2|3 years ago|reply
This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours. Obey me and live or disobey me and die.
[+] decremental|3 years ago|reply
> She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms.

Watch as the scope of the grift expands in realtime.

[+] coffeecheque|3 years ago|reply
That’s a rather cynical view of it, and I note you haven’t included the first bit of the sentence.

> “is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges”

The point of the project/section is, in very quick summary, to come at an issue by presenting it to an audience that cares about it and is looking for answers or ways to improve their lives.

This is a piece of journalism, a good jumping-off point. My point is that I come to HN for comments and discussion on topics and issues like AI, hoping to hear from real experts on the big technology issues.

I wish there was more of that - playing the ball - and less playing the writer.

[+] badcppdev|3 years ago|reply
She's a journalist. If you have a problem with that then you probably need to study more history.
[+] rhn_mk1|3 years ago|reply
What is this comment meant to say? I can't parse that.
[+] rhn_mk1|3 years ago|reply
I'm still waiting for the mainstream media to acknowledge that creating a caste of robot slaves would be just as much a catastrophe as being subjugated.
[+] vermilingua|3 years ago|reply
I can see Blindsight taking the place of 1984 as the "this is so true" book of the decade. Articulate, eloquent, but unthinking and unfeeling machines with no interior experience are going to be puppeteering our culture for years to come.
[+] jeisc|3 years ago|reply
we do not need AI to race toward the cliff of extinction and planetary upheaval, but it certainly may speed up our collective death wish
[+] RcouF1uZ4gsC|3 years ago|reply
Remember, the invention of the movable type printing press was a catastrophe depending on the perspective. It brought down the existing European social and religious order which led, for example, to the Thirty Years War in which millions of people died. More recently, the ability to rapidly print books also led to the rise of Hitler and World War II.

AI may lead to catastrophe, but I bet it will be a lot less catastrophic than the printing press.

[+] peoplefromibiza|3 years ago|reply
> Remember, the invention of the movable type printing press was a catastrophe depending on the perspective

no, it wasn't.

> It brought down the existing European social and religious order which led, for example, to the Thirty Years War in which millions of people died

No, it didn't.

The Thirty Years' War happened more than 150 years after Gutenberg.

The religious order wasn't brought down by the press, BTW; it was alive and kicking in the 17th century.