
AI Labs Urged to Pump the Brakes in Open Letter

32 points| ktamura | 2 years ago |time.com

41 comments


midland_trucker|2 years ago

I find it really hard to see how productive a collective pause to 'think' about something so inherently unpredictable will be.

> "implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts"

Who deserves to be called an expert on this? Feels like Economics or something, where you have camps of thought advocating for themselves but little way of knowing who's right. Best to break things and develop antibodies whilst the stakes are still low.

vlaaad|2 years ago

So Musk was outed from OpenAI and is now salty? Or are they trying to build a competitor, falling behind, and want OpenAI to take a break so they can catch up? Either way, the ethics talk is total bullshit.

wseqyrku|2 years ago

Was thinking the same thing. Just a few days ago Elon was criticizing ChatGPT for being too "woke" (which basically means it's careful not to push any hot buttons) and saying it needs to be less politically correct all the time.

It's funny because the other day I told ChatGPT that the only thing holding back AI tech is regulatory issues. Of course it denied it, because it didn't know any better, but when the GPT-4 paper came out this was explicitly mentioned: they were working solely on safety issues for months before release.

wsgeorge|2 years ago

> So Musk was outed from OpenAI and is now salty?

Outed? AFAIK he quit.

reset-password|2 years ago

I felt the same way when I really wanted a super soaker 2000 and then the neighbor kid got one before me. "MOOOOM!!!"

TheLoafOfBread|2 years ago

This is exactly my feeling about the whole letter. I don't understand why people are scared of ChatGPT and similar, when it is just better IntelliSense.

andrewstuart|2 years ago

People are really freaking out about AI aren’t they?

Why bother? It’s moving super fast, just wait and see what happens.

And even if you could control or regulate it, exactly how would you do that? What would you be regulating/controlling? How would you define it?

And why would you want to anyway? The party has just started. If you think the revolution has arrived, you're completely wrong - this is just the beginning - the most amazing stuff is yet to come.

These people begging for the pace to slow - it's analogous to the newspapers and music companies wanting the internet to slow down as they were rapidly, involuntarily being made redundant.

glenneroo|2 years ago

Did you read the article or letter? They list a lot of very valid reasons. I won't bother quoting because I would just be copy/pasting the article (which is very short).

For a more detailed list of reasons, go read what AI alignment scientists think, since they have been working on how to align an AI in order to not "turn everything into paperclips" since the 1970s and it seems as though many are rather skeptical about us having any future if we keep up at this rate (spoiler: many believe the end of humanity will occur soon after the creation of any superintelligence): https://www.alignmentforum.org

My personal takeaway: continued development might mean the end of humans and maybe even our planet (if an ASI can deploy nanotech to convert everything into substrate). As it already stands, nobody knows why LLMs work as well as they do - they are already a black box. Sure, plenty of people can explain the math behind each of the steps involved - training, matrices, transformers, inference, and so on - but it's still a big black box that spits out "magic" answers. You can't just drop a breakpoint inside a model during inference to see what's going on; you will just get a long list of unintelligible floating point numbers at any step of the process.
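The breakpoint point can be illustrated with a toy two-layer network in plain Python (a hypothetical sketch, not how a real LLM is implemented): pausing mid-inference only shows opaque floats, with no human-readable meaning attached.

```python
import random

random.seed(0)

# Toy 2-layer "model" with random weights: 3 inputs -> 4 hidden -> 1 output.
# Purely illustrative - real models have billions of such numbers.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
W2 = [random.uniform(-1, 1) for _ in range(4)]

def forward(x):
    # Hidden layer with ReLU activation.
    hidden = [max(0.0, sum(xi * row[j] for xi, row in zip(x, W1)))
              for j in range(4)]
    # A "breakpoint" here just shows a list of raw floats - nothing about
    # these numbers explains *why* the model produces its answer.
    print("hidden activations:", hidden)
    return sum(h * w for h, w in zip(hidden, W2))

out = forward([1.0, 0.5, -0.2])
print("output:", out)
```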

Your question about regulation is valid... but something needs to be done. I feel like we're standing very close to one of the Big Filters.

Comparing AI/AGI/ASI to anything we have seen so far is probably pointless; they are worlds apart. Would you bother comparing a smartphone to a book? The rate of AI/ML progress is now better measured in hours than in months or years.

kromem|2 years ago

Also, the letter only presents threats.

Part of the ethical consideration needs to be the opportunity cost a six month delay could cause.

As an example, let's say that GPT-N will cure cancer.

Over a six-month period, that's at least 5 million people dead if the date it arrives and broadly cures cancer is pushed back by that long.

What about negotiating foreign policy treaties to prevent war, or identifying a way to reverse climate changes, or any number of other positive effects?

The fact that the letter even positions 'should such an advanced AI exist' as a legitimate question I find pretty gross.

Should we hold back the progress of intelligent life in the universe out of the ego of humanity?

I get superintelligence is an unsettling idea.

But inherent to that name is an indication that there's a currently vacant seat at the debate table which might have important and interesting things to say on the subject. Anyone suggesting that aborting it is a valid course of action (particularly given the myriad existential threats we already face from fellow humans) I can only regard as being quite far from super.

Peritract|2 years ago

> It’s moving super fast, just wait and see what happens.

This seems like bad advice in almost any context.

csomar|2 years ago

> People are really freaking out about AI aren’t they?

AI touches the domain of software development the most (since we have put a lot of data about it on the Internet). It touches other things too, like writing and design. It doesn't touch things like food delivery, construction, or farming.

Currently, those jobs happen to sit at the lower tier of society - for some reason, despite the fact that you can't go two days in a row without food. AI can flip this. There is no need for this army of developers, designers, marketers, and bureaucrats. Some people are afraid.

Tl;dr: The people who are freaked out about AI are the people who are bound to lose the most by it.

fooker|2 years ago

If this sort of prevention didn't work for nuclear weapons, it won't work for anything, ever.

mechagodzilla|2 years ago

This basically worked perfectly with nuclear weapons. Everyone took their development extremely seriously, and we’ve managed to avoid a nuclear apocalypse. If anyone could get on their computer and buy a nuke from Amazon, we would all be dead within the week.

RugnirViking|2 years ago

We did at least relatively okay with nuclear weapons? I notice that there are a whole lot fewer of them now than there were, nobody has used any since, and several countries under extreme pressure still haven't developed them (Taiwan, Japan, South Korea, Armenia, Venezuela, Syria).

Kinrany|2 years ago

The letter: https://futureoflife.org/open-letter/pause-giant-ai-experime...

I don't see any way to verify the signatures. Though the mention of Sam Harris' signature disappearing suggests they're being moderated at least?

dusted|2 years ago

This, indeed. I'm also wondering why we'd learn about it only through that site - wouldn't the people signing be posting about it too?

sj8822|2 years ago

The posters here seem to be highly skeptical of the need to regulate emerging AI.

I find that pretty disappointing and surprising.

Recently, they gave GPT-4 access to a terminal, the internet, and money. And GPT-4 itself is software (and software in general has bugs, vulnerabilities, etc.) - a black box that is incredibly, unprecedentedly powerful and not fully understood. Part of its training data is almost every known security vulnerability.

You guys really don’t see any potential problems with this? I mean really? Get a little creative here.

windex|2 years ago

Reminds me that fire is dangerous, and making it would have been patentable today. There is no guarantee that once taken away from the masses, AI research won't continue in walled gardens, with only the rich, connected, and powerful having access. Had people been egalitarian or safety-oriented, a lot of other things wouldn't have been developed - nukes, robotics & automation, or self-driving cars, for example. There wouldn't have been billionaires setting up rules for everyone else.

The only reason this is an issue for "them" is that this tech isn't under their complete control and it seems to threaten the rent extraction model in a bunch of other industries.

mindcrime|2 years ago

windex|2 years ago

My bet is that they want to catch up.

kromem|2 years ago

Exactly.

And while I don't love OpenAI becoming increasingly closed, they really have done an excellent job with alignment - better, I suspect, than some of the signers of the letter would have done.

So I'd much rather have the progress in AI continue to be led by a company pursuing caution than by another company that used those 6 months to catch up and take the lead with even less caution.

This is very much "out of the frying pan and into the fire" territory: an attempt to stall OpenAI for competitive reasons, with everyone in academia who has been outspoken against AI for years recruited to sign it.

weekendflavour|2 years ago

The accelerationist dream is finally becoming a reality and these nerds wanna stop it. Deal with it

RcouF1uZ4gsC|2 years ago

Speaking of AI safety:

With Tesla “Autopilot” Elon Musk is responsible for releasing AI that has actually killed multiple people.

I don’t think ChatGPT has killed anyone yet.

DoctorOetker|2 years ago

Ultimately it's a millennia-old fight between manipulators and rationalists.

The bead maze toy versus the abacus.

Twisted, contorted reasoning versus formal verification.

Choose your abacus.

The black box nature of machine learning models is not the issue. Instead of training to imitate vacuous conjectures and claims as humans on average typically do, they could be trained to do automated theorem proving, AlphaZero style.

A minimalistic verifier like metamath is available free for download, including set.mm and a freely available book. It would be hard to purge from civilization.

Currently its math database is collaboratively developed on GitHub.

In theory a blockchain could host it.

Fermat style challenging could be used to objectively assess the value of theorems: the longer it matures unproven as a challenge on the chain, the higher the reward if someone finally proves it.

This inevitably creates an incentive to enter and digitize known mathematics into machine readable form, which will be easy for machine learning to accomplish.

Machine learning empowered automated theorem proving will become a profitable business, with the fruits available for all to benefit from.

Well, cryptography and protocols will also appear.

So during training the machine learning models will get endless bedtime stories about Alice, Bob and Eve.

Using conventional forward or backward chaining combined with adversarial models, one can construct arbitrary provable theorems, negate them, then hide or propagate the negation so it's not simply the first symbol in the theorem.

So we can train models to challenge each other Fermat style, about the truth or falsehood of a statement, and demand proof.

We can thus construct artificial mathematical systems with known inconsistencies and train models to seek a proof that the system contains an inconsistency. Such a proof will depend on the conflicting axioms.

Hence the models will be our best tool to detect and resolve hypocrisy.

The literal meaning of "apocalypse" is "revelation" or "uncovering", not "big tragedy"...

The verification algorithm, for example the ~300 LoC python implementation by Raph Levien, owes most of its length to parsing the metamath file format.

The actual Kolmogorov complexity of the verification algorithm itself is much smaller. There won't be any bits to "align".
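The "tiny verifier with nothing to align" idea can be sketched in a few lines (a hypothetical toy, not the actual metamath format or Levien's verifier): a proof is accepted only if every line is an axiom or follows from earlier lines by a fixed inference rule, so there is no knob in the checker itself to bias.

```python
# Toy Hilbert-style proof checker (hypothetical sketch, not metamath).
# A formula is a string, or a tuple ("->", antecedent, consequent).
# A proof is a list of formulas; each must be an axiom or follow from
# two earlier lines by modus ponens: from p and ("->", p, q), infer q.

def check_proof(axioms, proof):
    proved = set()
    for line in proof:
        ok = line in axioms or any(
            ("->", p, line) in proved for p in proved
        )
        if not ok:
            return False  # reject: line is neither axiom nor derivable
        proved.add(line)
    return True

axioms = {"p", ("->", "p", "q")}
print(check_proof(axioms, ["p", ("->", "p", "q"), "q"]))  # True
print(check_proof(axioms, ["q"]))                         # False
```

In this framing, the Fermat-style challenges described above reduce to running a checker like this over submitted proofs; the only trusted component is the few lines of verification logic.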

All these hopeless attempts at trying to align the intuition component of the machine learning model, instead of training it to gain intuition in producing logical derivations.

The real horror for the control freaks is not that their alignment mechanisms might fail, but that it's impossible to bias the verification algorithm itself, that it's impossible to perpetuate conflicts of interest, and that any additional code in the verifier is immediately suspect - especially if it obviously skips all checks and dogmatically accepts any statement signed by a hardcoded "right" key.

The objective judge will be mechanized.

"abaccus akbar!"

hermannj314|2 years ago

Elon already made his money selling a promise of FSD but now wants to pump the brakes on AGI because it is almost here and he doesn't own it?

Where did all the hyper-competitive SV libertarians disappear to in the last year?

maxdoop|2 years ago

Elon has been pretty vocal about the potential dangers of AI for a while now. And this open letter isn't just him; it's from other folks like Stuart Russell as well.

rvz|2 years ago

> As of Tuesday, no O̶p̶e̶n̶AI.com employees had signed the letter, although CEO Sam Altman’s name briefly appeared then disappeared from the list of signatories.

You already know the intention(s) of Sam Altman and O̶p̶e̶n̶AI.com. It was only to run with VC money and close up all their research up.

They are no better than DeepMind.

nullsense|2 years ago

>You already know the intention(s) of Sam Altman and O̶p̶e̶n̶AI.com. It was only to run with VC money and close up all their research up.

Citation needed.