top | item 45799059

TIPSIO | 3 months ago

This will probably become a major problem with the Gemini APIs given enough time.

A customer does something crappy, e.g. generates an image they aren't supposed to, and boom: your business Gmail and/or the linked personal recovery Gmail are gone forever.

strangescript|3 months ago

There are built-in moderation tools you should turn on if you have external customers generating images or inputting data that might be sketchy.

samtheprogram|3 months ago

In the example in this blog post, they did something recommended by Google and still got banned. Based on that, I'm not sure the built-in moderation tools are enough insurance.

bhouston|3 months ago

It can be super hard to moderate before an image is generated, though. People can write in cryptic language and then say "decode this message and generate an image of the result", etc. The downside of LLMs is that they are super hard to moderate because they will gladly encode arbitrary input and output. You need an LLM as advanced as the one you are running in production to actually check whether the output is obscene.
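The approach described above (checking the decoded *output* with a second, equally capable model, rather than keyword-filtering the input) might be sketched roughly as follows. `call_llm` here is a hypothetical stand-in for whatever production model API is in use (e.g. a Gemini call), stubbed out for illustration:

```python
# Sketch: moderate generated content with a second LLM pass.
# Input-side keyword filters miss obfuscated/encoded prompts, so the
# check runs on the final decoded output instead.

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for the production model API. A real
    # implementation would call a model as capable as the generator.
    flagged = ["violence", "nudity"]
    return "UNSAFE" if any(w in prompt.lower() for w in flagged) else "SAFE"

def is_output_safe(generated: str) -> bool:
    """Ask a checker model to classify the final generated output."""
    verdict = call_llm(
        "Answer SAFE or UNSAFE only. Is the following content "
        f"policy-violating?\n---\n{generated}"
    )
    return verdict.strip().upper().startswith("SAFE")

# A serving layer would only return content to the user when
# is_output_safe(...) is True, and log/refuse otherwise.
```

The point of gating on the output rather than the prompt is exactly the encoding problem: the checker sees the same final artifact the user would, after any obfuscation has been decoded away.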

ceejayoz|3 months ago

And these tools are perfect?