top | item 38956900


lolsal | 2 years ago

Sincerely asking - how does this solve the problem? I could generate a bunch of dog-shit and then sign it and publish it. Even with user attestation services provided by Apple, Google, et al., couldn't I still automate generating a bunch of AI junk and signing it?
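The commenter's point can be made concrete: a signature proves only who holds the key, not that the signed content is human-made or worthwhile. A minimal sketch (using stdlib HMAC-SHA256 as a stand-in for a real signature scheme like Ed25519; the key and post text are made up for illustration):

```python
import hmac
import hashlib

# Hypothetical publishing key - anyone who controls a key can sign anything.
SECRET_KEY = b"my-publishing-key"

def sign(content: bytes) -> str:
    """Return a hex 'signature' over content (stand-in for Ed25519 etc.)."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check that the signature matches the content under the same key."""
    return hmac.compare_digest(sign(content), signature)

# Nothing stops signing machine-generated junk at scale:
junk_posts = [f"AI junk post #{i}".encode() for i in range(3)]
signed = [(post, sign(post)) for post in junk_posts]

# Every junk post verifies perfectly - the signature says nothing about quality.
print(all(verify(post, sig) for post, sig in signed))
```

The takeaway is that attestation shifts the question from "was this machine-generated?" to "do I trust the key holder?", which is exactly what the replies below argue about.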


jprete | 2 years ago

This would have to work by individuals or organizations building a good reputation over time, so that their specific output is trusted. The fact that an LLM produced the text is not nearly as relevant as whether anyone has staked their reputation on its correctness.

ShamelessC | 2 years ago

> This would have to work by individuals or organizations building a good reputation over time, so their specific output is trusted.

So…the current system?

throwaway29812 | 2 years ago

Exactly. This presupposes that humans are always better than AI - that they don't also produce spam or harmful content.

They do.