I_am_tiberius | 22 days ago

If it's in your power, make sure user prompts and LLM responses are never read, never analyzed, and never used for training - not anonymized, not derived, not at all.

surajrmal | 22 days ago

No single person other than Sam Altman can stop them from using anonymized interactions for training and metrics - at least in the consumer tiers.

satvikpendem | 22 days ago

It's a little too late for that; all the models train on prompts and responses.