simias|2 years ago
I am one of these ninnies I guess, but isn't it rational to be a bit worried about this? When we see the deep effects that social networks have had on society (both good and bad) isn't it reasonable to feel a bit dizzy when considering the effect that such an invention will have?
Or maybe your point is just that it's going to happen regardless of whether people want it or not, in which case I think I agree, but it doesn't mean that we shouldn't think about it...
rafaelmn|2 years ago
I'm almost certain that I could give you the components and instructions for building a nuclear bomb, and the most likely outcome is that you'd die of radiation poisoning.
Most people have trouble assembling IKEA furniture; give them a hallucination-prone LLM and they're more likely to mustard-gas themselves than synthesize LSD.
People with the necessary skills can probably get access to the information in other ways - I doubt an LLM would be an enabler here.
unknown|2 years ago
[deleted]
0xDEAFBEAD|2 years ago
An LLM doesn't just provide instructions -- you can ask it for clarification as you're working. (E.g. "I'm on step 4 and I ran into problem X, what should I do?")
This isn't black and white. Perhaps given a Wikihow-type article on how to build a bomb, 10% succeed and 90% die of radiation poisoning. And with the help of an LLM, 20% succeed and 80% die of radiation poisoning. Thus the success rate has increased by a factor of 2.
We're very lucky that terrorists are not typically the brightest bulbs in the box. LLMs could change that.
EGreg|2 years ago
THE ISSUE ISN'T ACCESS TO KNOWLEDGE! And alignment isn't the main issue.
The main issue is SWARMS OF BOTS running permissionlessly wreaking havoc at scale. Being superhuman at ~30 different things all the time. Not that they’re saying a racist thought.
notatoad|2 years ago
surely there are more creative and insidious ways that AI can disrupt society than by showing somebody a guide to making a bomb that they can already find on Google. Blocking that is security theatre on the same level as taking away your nail clippers before you board an airplane.
simias|2 years ago
I feel like AI can amplify this issue tremendously. That's my main concern really, not people making pipe bombs or writing rape fanfiction.
mitchitized|2 years ago
TRIGGERED
anonyfox|2 years ago
I am not willing to sacrifice even 1% of the model's capabilities to sugarcoat sensibilities, and currently it seems that GPT-4 is becoming more and more hobbled by the moderation attempts... so I basically _have to_ jump ship once a competitor has a similar base model that is not censored.
Even the bare goal of "moderating it" is wasted time, someone else (tm) will ignore these attempts and just do it properly without holding back.
People have been motivated by their last president to drink bleach, and some died - just accept that those kinds of people exist and move on for the rest of us. We need every bit of help we can get to solve real-world problems.
jstarfish|2 years ago
It can. He's swole AF.
(Though I'm pretty sure that was just Muhammad Ali in a turban.)
> People have been motivated by their last president to drink bleach and died - just accept that there are those kind of people and move on for the rest of us.
Need-to-know basis exists for a reason. You're not being creative enough if you think offending people is the worst possible misuse of AI.
People drinking bleach or refusing vaccines is a self-correcting problem, but the consequences of "forbidden knowledge" frequently get externalized. You don't want every embittered pissant out there to be able to autogenerate a manifesto, a shopping list for Radio Shack and a lesson plan for building an incendiary device in response to a negative performance review.
Right now it's all fun exercises like "how can I make a mixed drink from the ingredients I have," but eventually some enterprising terrorist will use an uncensored model trained on chemistry data...to assist in the thought exercise of how to improvise a peroxide-based explosive onboard an airplane, using fluids and volumes that won't arouse TSA suspicion.
Poison is the other fun one; the kids are desperate for that inheritance money. Just give it time.
xyproto|2 years ago
Books should not be burned, nobody should be shielded from knowledge they are old enough to seek, and information should be free.
pmarreck|2 years ago
About as rational as worrying that my toddler will google "boobies", which is to say, being worried about something that will likely have no negative side effect. (Visual video porn is a different story, however. But there's at least some evidence to support that early exposure to that is bad. Plain nudity though? Nothing... Look at the entirety of Europe as an example of what seeing nudity as children does.)
Information is not inherently bad; acting badly on it is. I may already know how to make a bomb, but will I do it? HELL no. Are you worried about young men dealing with emotional challenges between the ages of 16 and 28 causing harm? Well, I'm sure that being unable to simply ask the AI how to commit the most violence won't stop them from jailbreaking it and re-asking, or just googling, or finding a gun, or acting out in some other fashion. They likely have a driver's license; they can mow people down pretty easily. The point is, there are a thousand things already worse, more dangerous, and more readily available than an AI telling you how to make a bomb or giving you written pornography.
Remember also that the accuracy cost of enforcing this nanny-safetying might result in bad information that definitely WOULD harm people. Is that cost actually greater than any harm reduction from putting what amounts to a speed bump in the way of a bad actor?
nilstycho|2 years ago
Perhaps you think this analogy is a stretch, but why are you sure you don't want power concentrated if you aren't sure about the nature of the power? Or do you in fact think that we would be safer if more countries had weapons of mass destruction?
MillionOClock|2 years ago
One example: I have a hard time finding an LLM that will generate comically rude text without outputting outright disgusting content from time to time. I'd love to see a company create models that are mostly uncensored but stay within ethical bounds.
jrm4|2 years ago
Literally no. None at all.
I teach at University with a big ol' beautiful library. There's a Starbucks in it, so they know there's coffee in it.
But ask my students for "legal ways they can watch the TV show The Office," and the big building with the DVDs (and probably the plans for nuclear weapons and such) never much comes up.
(Now, individual bad humans leveraging the idea of AI? That may be an issue)
waynesonfire|2 years ago
A censored model feels to me like my freedom of speech is being infringed upon. I am unable to explore my ideas and thoughts.
gojomo|2 years ago
This is false in several respects. Not only are some models trained on materials that are either not on the internet or not easy to find (especially given Google's decline at surfacing advanced topics), but they also show an ability to synthesize related materials into more useful (or at least more compact) forms.
In particular, consider there may exist topics where there is enough public info (including deep in off-internet or off-search-engine sources) that a person with a 160 IQ (+4SD, ~0.0032% of population) could devise their own usable recipes for interesting or dangerous effects. Those ~250K people worldwide are, we might hope & generally expect, fairly well-integrated into useful teams/projects that interest them, with occasional exceptions.
Now, imagine another 4 billion people get a 160 IQ assistant who can't say no to whatever they request, able to assemble & summarize-into-usable form all that "public" info in seconds compared to the months it'd take even a smart human or team of smart humans.
That would create new opportunities & risks, via the "different interface", that didn't exist before and do in fact "change much".
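The ~250K figure above is easy to sanity-check. A quick sketch (the mean-100/SD-15 normal model for IQ and the ~8 billion world population are my assumptions, not stated by the commenter):

```python
from statistics import NormalDist

# Fraction of the population at or above +4 standard deviations
# (IQ >= 160, assuming IQ ~ Normal(mean=100, sd=15)).
tail = 1 - NormalDist().cdf(4.0)

world_pop = 8_000_000_000  # assumed rough world population

print(f"{tail:.4%} of population")        # roughly 0.0032%
print(f"{tail * world_pop:,.0f} people")  # roughly 250,000
```

The one-sided +4SD tail of the standard normal is about 3.2e-5, which matches the ~0.0032% and ~250K-people figures in the comment.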
patrec|2 years ago
In particular:
> If a language model spits something out it was already available and indexable on the internet,
is patently false.