top | item 43025747


tyho | 1 year ago

There are way better ways to watermark LLM output. It's easy to make it undetectable, which this isn't.


shawnz | 1 year ago

I recently worked on a steganography project which could be useful for this problem. See: https://github.com/shawnz/textcoder

andai | 1 year ago

That's really cool, you should repost the HN submission.

antognini | 1 year ago

The issue with the standard watermark techniques is that they require an output of at least a few hundred tokens to reliably imprint the watermark. This technique would apply to much shorter outputs.

pava0 | 1 year ago

For example?

tyho | 1 year ago

A crude way to watermark: first establish a keyed DRBG. For every nth token prediction, read one bit from the DRBG per possible token to label it red or black. Before selecting the next token, set the logits of all black tokens to -Inf, which ensures a red token will be selected.

To detect: establish the same DRBG, tokenize, and for each nth token determine the red set of tokens at that position. If you see only red tokens across many positions, you can be confident the content is watermarked with your key.

This would probably take a bit of fiddling to work well, but would be pretty much undetectable. Conceptually it forces the LLM to use a "flagged" synonym at key positions: a more sophisticated version of a shibboleth.
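The hard red/black scheme above can be sketched as below. This is only an illustration under assumed parameters (a tiny hypothetical vocabulary, SHA-256 as the keyed seed derivation, Python's `random.Random` standing in for a real DRBG); a production version would hook into the sampler of an actual model.

```python
import hashlib
import random

VOCAB_SIZE = 1000  # hypothetical vocabulary size, for illustration
KEY = b"my-secret-watermark-key"  # hypothetical watermark key

def red_set(position: int, key: bytes = KEY) -> set[int]:
    """Seed a DRBG from the key and token position, draw one bit per
    token id to label it red (1) or black (0), and return the red ids."""
    seed = hashlib.sha256(key + position.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    return {t for t in range(VOCAB_SIZE) if rng.getrandbits(1)}

def watermark_logits(logits: list[float], position: int, n: int = 4) -> list[float]:
    """At every nth position, force black-token logits to -Inf so the
    sampler can only pick a red token."""
    if position % n != 0:
        return logits
    reds = red_set(position)
    return [l if t in reds else float("-inf") for t, l in enumerate(logits)]

def detect(token_ids: list[int], n: int = 4) -> float:
    """Fraction of checked positions whose token falls in the red set:
    about 0.5 for unwatermarked text, 1.0 for watermarked text."""
    checked = [p for p in range(len(token_ids)) if p % n == 0]
    hits = sum(token_ids[p] in red_set(p) for p in checked)
    return hits / len(checked) if checked else 0.0
```

Since each red set is recomputable from the key and position alone, detection needs no access to the model, only to the tokenizer and the key.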

In practice you might choose to instead watermark all tokens, less heavy-handedly (nudging logits rather than overriding them), and use highly robust error-correcting codes.
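The softer variant can be sketched the same way: add a bias to red-token logits at every position instead of forbidding black tokens, and detect with a binomial test on the red-hit count. Again a hedged illustration with assumed parameters (small hypothetical vocabulary, SHA-256-seeded `random.Random` as the keyed DRBG, an arbitrary bias of 2.0), not the method of any particular paper or product.

```python
import hashlib
import math
import random

VOCAB = 1000  # hypothetical vocabulary size, for illustration

def is_red(token_id: int, position: int, key: bytes = b"secret") -> bool:
    """Keyed, per-position red/black label for a token id."""
    seed = hashlib.sha256(key + position.to_bytes(8, "big")).digest()
    rng = random.Random(seed)
    bits = [rng.getrandbits(1) for _ in range(VOCAB)]
    return bool(bits[token_id])

def nudge(logits: list[float], position: int, delta: float = 2.0) -> list[float]:
    """Soft watermark: bias red-token logits by delta at every position,
    rather than setting black-token logits to -Inf."""
    return [l + delta if is_red(t, position) else l for t, l in enumerate(logits)]

def z_score(token_ids: list[int]) -> float:
    """Under the null (no watermark), red hits are Binomial(T, 0.5).
    A large positive z-score indicates the text is watermarked."""
    T = len(token_ids)
    hits = sum(is_red(tok, p) for p, tok in enumerate(token_ids))
    return (hits - 0.5 * T) / math.sqrt(0.25 * T)
```

Because every position is only nudged, some black tokens still appear, which is what makes the statistical test (rather than an all-red check) necessary, and it is also what leaves room for error-correcting codes to absorb paraphrasing.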