top | item 44489915

MatekCopatek | 7 months ago

You can design a racist propaganda poster, put someone's face onto a porn pic, or manipulate evidence with Photoshop. Apart from super specific things like trying to print money, the tool doesn't stop you from doing things most people would consider distasteful, creepy, or even illegal.

So why are we doing this now? Has anything changed fundamentally? Why can't we let software do everything and then blame the user for doing bad things?

discuss

dkyc | 7 months ago

I think what changed is that we can now at least attempt to limit 'bad' things with technical measures. It was legitimately, technically impossible 10 years ago to prevent Photoshop from designing propaganda posters. Of course today's 'LLM safety' features aren't watertight either, but with the combination of natural-language input and LLM-based safety measures, there are more options today to restrict what the software can do than there were in the past.

The example you gave about preventing money counterfeiting with technical measures also supports this: counterfeiting was easier to detect technically, so it was done.

Whether that's a good thing or a bad thing, everyone has to decide for themselves, but objectively I think this is the reason.

bhk | 7 months ago

In other words, to whatever extent they can control or manipulate the behavior of users, they will. In the limit t->∞, probably true.

MisterTea | 7 months ago

What's hard to understand here? Using those tools takes skill and time. AI makes things like racist posters and revenge porn completely effortless and instant.