simias|2 years ago

>the AI safety ninnies

I am one of these ninnies I guess, but isn't it rational to be a bit worried about this? When we see the deep effects that social networks have had on society (both good and bad) isn't it reasonable to feel a bit dizzy when considering the effect that such an invention will have?

Or maybe your point is just that it's going to happen regardless of whether people want it or not, in which case I think I agree, but it doesn't mean that we shouldn't think about it...

rafaelmn|2 years ago

I think computer scientists/programmers (and other intellectuals who deal only in ideas) strongly overvalue access to knowledge.

I'm almost certain that I could give you the components and instructions for building a nuclear bomb, and the most likely outcome is that you'd die of radiation poisoning.

Most people have trouble assembling IKEA furniture; give them a hallucination-prone LLM and they're more likely to mustard-gas themselves than synthesize LSD.

People with the necessary skills can probably get access to the information in other ways - I doubt an LLM would be an enabler here.

barrysteve|2 years ago

A teenager named David Hahn attempted just that and nearly gave the whole neighbourhood radiation poisoning.

0xDEAFBEAD|2 years ago

>I'm almost certain that I can give you components and instructions on how to build a nuclear bomb and the most likely thing that would happen is you'd die of radiation poisoning.

An LLM doesn't just provide instructions -- you can ask it for clarification as you're working. (E.g. "I'm on step 4 and I ran into problem X, what should I do?")

This isn't black and white. Perhaps given a Wikihow-type article on how to build a bomb, 10% succeed and 90% die of radiation poisoning. And with the help of an LLM, 20% succeed and 80% die of radiation poisoning. Thus the success rate has increased by a factor of 2.

We're very lucky that terrorists are not typically the brightest bulbs in the box. LLMs could change that.

EGreg|2 years ago

The Anarchist Cookbook - anyone have a link?

THE ISSUE ISN'T ACCESS TO KNOWLEDGE! And alignment isn't the main issue.

The main issue is SWARMS OF BOTS running permissionlessly, wreaking havoc at scale. Being superhuman at ~30 different things, all the time. Not that they might voice a racist thought.

esafak|2 years ago

No, we don't. Knowledge is power. Lack of it causes misery and empires to fall.

croes|2 years ago

The problem with AI won't be forbidden knowledge but mass misinformation.

notatoad|2 years ago

I think it's perfectly reasonable to be worried about AI safety, but silly to claim that the thing that will make AIs 'safe' is censoring information that is already publicly available, or content somebody declares obscene. An AI that can't write dirty words is still unsafe.

Surely there are more creative and insidious ways that AI can disrupt society than by showing somebody a guide to making a bomb that they can already find on Google. Blocking that is security theatre on the same level as taking away your nail clippers before you board an airplane.

simias|2 years ago

That's a bit of a strawman though, no? I'm definitely not worried about AI being used to write erotica or to research drugs; I'm more worried about the societal effects. Knowledge is more available than ever, yet we also see echo chambers develop online, with people effectively becoming less informed because they are fed their own biases over and over again.

I feel like AI can amplify this issue tremendously. That's my main concern really, not people making pipe bombs or writing rape fanfiction.

RockRobotRock|2 years ago

As long as OpenAI gets paid, they don't care if companies flood the internet with low-quality drivel, make customer service hell, or just generally make our lives more frustrating. But god forbid an individual take full advantage of what GPT4 has to offer.

downWidOutaFite|2 years ago

That is not what the "AI safety ninnies" are worried about. The "AI safety ninnies" aren't all corporate lobbyists with ulterior motives.

mitchitized|2 years ago

> taking away your nail clippers before you board an airplane.

TRIGGERED

anonyfox|2 years ago

I am in the strictly "not worried" camp, on the edge of "c'mon, stop wasting time on this". Sure, there might be some uproar if AI can paint a picture of Mohammed, but those moral double standards need to be dealt with at some point anyway.

I am not willing to sacrifice even 1% of the model's capabilities to sugarcoat sensibilities, and currently it seems that GPT4 is more and more hobbled by moderation attempts... so I basically _have to_ jump ship once a competitor has a similar base model that is not censored.

Even the bare goal of "moderating it" is wasted time; someone else (tm) will ignore these attempts and just do it properly without holding back.

People were motivated by their last president to drink bleach, and some died - just accept that those kinds of people exist and move on for the rest of us. We need every bit of help we can get to solve real-world problems.

jona-f|2 years ago

I am thoroughly on your side and I hope this opinion gets more traction. Humans will become obsolete though, just like other animals are compared to humans now. So it's understandable that people are worried. They instinctively realize what's going on, but make up bullshit to delude themselves about it - which is just more of the endless human stupidity.

jstarfish|2 years ago

> Sure there might be some uproar if AI can paint a picture of mohammed

It can. He's swole AF.

(Though I'm pretty sure that was just Muhammad Ali in a turban.)

> People have been motivated by their last president to drink bleach and died - just accept that there are those kind of people and move on for the rest of us.

Need-to-know basis exists for a reason. You're not being creative enough if you think offending people is the worst possible misuse of AI.

People drinking bleach or refusing vaccines is a self-correcting problem, but the consequences of "forbidden knowledge" frequently get externalized. You don't want every embittered pissant out there to be able to autogenerate a manifesto, a shopping list for Radio Shack and a lesson plan for building an incendiary device in response to a negative performance review.

Right now it's all fun exercises like "how can I make a mixed drink from the ingredients I have," but eventually some enterprising terrorist will use an uncensored model trained on chemistry data... to assist in the thought exercise of how to improvise a peroxide-based explosive onboard an airplane, using fluids and volumes that won't arouse TSA suspicion.

Poison is the other fun one; the kids are desperate for that inheritance money. Just give it time.

xyproto|2 years ago

AI models are essentially knowledge and information, just in a different file format.

Books should not be burned, nobody should be shielded from knowledge they are old enough to seek, and information should be free.

pmarreck|2 years ago

> but isn't it rational to be a bit worried about this?

About as rational as worrying that my toddler will google "boobies", which is to say, worrying about something that will likely have no negative side effect. (Visual video porn is a different story; there's at least some evidence that early exposure to that is bad. Plain nudity, though? Nothing... Look at the entirety of Europe for an example of what seeing nudity as children does.)

Information is not inherently bad; acting badly on that information is. I may already know how to make a bomb, but will I do it? HELL no. Are you worried about young men between the ages of 16 and 28, dealing with emotional challenges, causing harm? Well, I'm sure that being unable to simply ask the AI how to commit the most violence won't stop them from jailbreaking it and re-asking, or just googling, or finding a gun, or acting out in some other fashion. They likely have a driver's license; they can mow people down pretty easily. The point is, there are a thousand things already worse, more dangerous, and more readily available than an AI telling you how to make a bomb or giving you written pornography.

Remember also that the accuracy cost of enforcing this nanny-safetying might produce bad information that definitely WOULD harm people. Is that cost actually greater than any harm reduction from putting what amounts to a speed bump in the way of a bad actor?

MPSimmons|2 years ago

The danger from AI isn't the content of the model, it's the agency that people are giving it.

contravariant|2 years ago

I'm not sure how this is going to end, but one thing I do know is that I don't want a small number of giant corporations to hold the reins.

nilstycho|2 years ago

“I'm not sure how nuclear armament is going to end, but one thing I do know is that I don't want a small number of giant countries to hold the reins.”

Perhaps you think this analogy is a stretch, but why are you sure you don't want power concentrated if you aren't sure about the nature of the power? Or do you in fact think that we would be safer if more countries had weapons of mass destruction?

mardifoufs|2 years ago

Worried? Sure. But it sucks being basically at the mercy of some people in Silicon Valley and their definition of what is moral and good.

MillionOClock|2 years ago

There is definitely a risk, but I don't like the way many companies approach it: by entirely banning the use of their models for certain kinds of content, I think they are missing the opportunity to correctly align them and set proper ethical guidelines for the use cases that will inevitably come out of them. Instead of tackling the issue, they let other, less ethical actors do it.

One example: I have a hard time finding an LLM that will generate comically rude text without outputting outright disgusting content from time to time. I'd love to see a company create models that are mostly uncensored but stay within ethical bounds.

Salgat|2 years ago

These language models are just feeding you information from search engines like Google. The reason companies censor these models isn't to protect anyone; it's to avoid liability and bad press.

jrm4|2 years ago

AI Safety in a general sense?

Literally no. None at all.

I teach at a university with a big ol' beautiful library. There's a Starbucks in it, so my students know there's coffee in it.

But ask my students for "legal ways they can watch the TV show The Office" and the big building with the DVDs - and also, probably, the plans for nuclear weapons and stuff - never much comes up.

(Now, individual bad humans leveraging the idea of AI? That may be an issue)

waynesonfire|2 years ago

I'm not smart enough to articulate why censorship is bad. Intuitively, though, the argument seems similar to the one behind our freedom-of-speech laws.

A censored model feels to me like my freedom of speech is being infringed upon. I am unable to explore my ideas and thoughts.

paxys|2 years ago

The AI isn't creating a new recipe on its own. If a language model spits something out, it was already available and indexable on the internet, and you could already search for it. Having a different interface to it doesn't change much.

gojomo|2 years ago

> "If a language model spits something out it was already available and indexable on the internet"

This is false in several respects. Not only are some models trained on materials that are either not on the internet or not easy to find (especially given Google's decline at surfacing advanced topics), but they also show the ability to synthesize related materials into more useful (or at least more compact) forms.

In particular, consider there may exist topics where there is enough public info (including deep in off-internet or off-search-engine sources) that a person with a 160 IQ (+4SD, ~0.0032% of population) could devise their own usable recipes for interesting or dangerous effects. Those ~250K people worldwide are, we might hope & generally expect, fairly well-integrated into useful teams/projects that interest them, with occasional exceptions.
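(Sanity check on those figures - a minimal Python sketch; the ~8 billion world population is my assumption:)

    # Fraction of a standard normal distribution above +4 SD, stdlib only.
    import math

    def normal_tail(z):
        """Share of the population more than z standard deviations above the mean."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    tail = normal_tail(4.0)                                   # ~3.17e-5
    print(f"{tail:.4%} of the population")                    # ~0.0032%
    print(f"~{tail * 8_000_000_000:,.0f} people worldwide")   # ~253,000, i.e. ~250K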

Now, imagine another 4 billion people getting a 160-IQ assistant that can't say no to whatever they request, able to assemble & summarize into usable form all that "public" info in seconds, versus the months it'd take even a smart human or team of smart humans.

That would create new opportunities & risks, via the "different interface", that didn't exist before and do in fact "change much".

patrec|2 years ago

Of course it changes much. AIs can synthesize information in increasingly non-trivial ways.

In particular:

> If a language model spits something out it was already available and indexable on the internet,

Is patently false.

IshKebab|2 years ago

Not sure what you mean by "recipe" but it can create new output that doesn't exist on the internet. A lot of the output is going to be nonsense, especially stuff that cannot be verified just by looking at it. But it's not accurate to describe it as just a search engine.

madsbuch|2 years ago

But it does? To take the word "recipe" literally: there is nothing stopping an LLM from synthesizing a new dish based on knowledge about the ingredients. Who knows, it might even taste good (or at least better than what the average Joe cooks).

xeromal|2 years ago

To take an extreme example, child pornography is available on the internet, but society does its best to make it hard to find.