top | item 38357006


axlprose | 2 years ago

Disincentivizing it from saying mean things just strengthens its agreeableness, and inadvertently incentivizes it to acquire social engineering skills.

Its potential to cause havoc doesn't go away; it just teaches AI how to interact with us without raising suspicion, while simultaneously limiting our ability to prompt/control it.


stavros | 2 years ago

How do we tell whether it's safe or whether it's pretending to be safe?

axlprose | 2 years ago

Your guess is about as good as anyone else's at this point. The best we can do is attempt to put safety mechanisms in place under the hood, but even that is speculative, because we can't actually tell what's going on inside these LLM black boxes.

6gvONxR4sf7o | 2 years ago

We don’t know yet. Hence all the people wanting to prioritize figuring it out.

losteric | 2 years ago

How do we tell whether a human is safe? Incrementally granted trust with ongoing oversight is probably the best bet. Anyway, the first malicious AGI would probably act like a toddler script-kiddie, not some superhuman social engineering mastermind.