w10-1 | 2 days ago

They do require that you allow them to use your name publicly.

They are silent on whether you can prohibit them from training on your input, so I assume you can.

My guess is, if even 10% of maintainers forget to disable training, then Anthropic will have a most excellent source of how really good developers approach problems that can be fed back into the model. That could improve things for everyone.

Of course, the whole purpose of a trial is to induce dependence on the service. Let’s hope that doesn’t reduce the skill of those maintainers. If open source code gets better as a result, that’s good for all.

TuxSH|2 days ago

> By accepting a Program subscription, you grant Anthropic permission to identify you publicly as a Program recipient, including by referencing your name, GitHub username, and associated open source project(s).

I was tempted to apply, but that part is anything but nice, so I think I'll just pass.

saulpw|2 days ago

There's no non-disparagement clause, so how about you let them use your name etc., and then come out in public, say mean things, and shame/embarrass them?

trollbridge|2 days ago

Of course they're going to train on open-source input (not like you could stop them).

And of course they're also going to train on your private inputs. It's right there in the TOS.

lostmsu|2 days ago

> And of course they're also going to train on your private inputs. It's right there in the TOS.

Anthropic actually says they won't train on your private inputs on paid plans as long as you opt out. Unlike Google and OpenAI.