top | item 47128177


riku_iki | 6 days ago

From the tweet, Anthropic's point is that distillation is OK, unless the new model has safeguards removed or is used for military or surveillance purposes.


dmonitor | 6 days ago

The fact that they're calling it an "attack" implies otherwise.

I find the entire premise of this announcement absurd. Fraudulent accounts? They're just accounts. They paid for the access the same as any other. They're accessing Claude just like a human (or *claw) would.

There's no argument against their strategy that doesn't make them complete hypocrites with respect to how they got the model training data in the first place.

riku_iki | 6 days ago

> them complete hypocrites with respect to how they got the model training data in the first place.

sure, hypocrisy is part of the rules for big games: politics and business.

> Fraudulent accounts? They're just accounts.

they tell the story in the blog post: they don't allow Claude in China, but those labs use proxy services to access Claude and mix their traffic in with regular users' to hide its activity

mongrelion | 5 days ago

I agree with you, especially with this:

> They paid for the access the same as any other.

If anything, this makes them more legit than Anthropic, because they are paying for the content, whereas Anthropic just scraped *all* the data it could get hold of. So, in this case the Chinese AI labs stand on higher moral ground LOL.

_aavaa_ | 6 days ago

I don’t think so. It reads much more like “distillation is okay when you do it to your own models.”