top | item 36534263

jedbrown | 2 years ago

Language models don't understand anything; they just manipulate tokens. It is a much harder task to write a spec (one that humans and courts can review, if needed, to determine that it is not infringing) and then implement that spec with a separately trained tool. The tech just isn't ready, and it's not clear that language models will ever get there.

What language models could do easily is obfuscate better, so that the license violation is harder to prove. That's behavior laundering -- no amount of human obfuscation (e.g., synonym substitution, renaming variables, swapping out control structures) can turn a plagiarized work into one that isn't. If we (via regulators and courts) let the Altmans of the world pull their stunt, they're going to end up with a government-protected monopoly on plagiarism laundering.
