tfehring|1 day ago
I have two qualms with this deal.
First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.
Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.
Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.
[0] https://x.com/sama/status/2027578652477821175
[1] https://x.com/UnderSecretaryF/status/2027594072811098230
syllogism|1 day ago
Government: "Anthropic, let us do whatever we want"
Anthropic: "We have some minimal conditions."
Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"
OpenAI: "Uh well I guess I should ask for those conditions"
Government: blasts Anthropic into the sun "Sure whatever, those conditions are okay...for now."
By taking the deal with the DoW, OpenAI accepts that it can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed to?
WarmWash|1 day ago
It looks like Anthropic wanted to be able to verify compliance with the terms themselves, whereas OpenAI was fine with letting the government police itself.
From the DoD's perspective, they don't want a situation where a target is being tracked and then the screen goes black because an Anthropic committee decided it was out of bounds.
spondyl|1 day ago
While I don't live in the US, I could imagine the US government arguing that the third-party doctrine[0] makes aggregation and bulk analysis of, say, phone record metadata a "lawful use" in that it isn't /technically/ unlawful, even though it would be unethical.
Another avenue might be purchasing data from ad brokers for mass analysis with LLMs, a practice documented in Byron Tau's Means of Control[1].
[0] https://en.wikipedia.org/wiki/Third-party_doctrine
[1] https://www.penguinrandomhouse.com/books/706321/means-of-con...
estearum|1 day ago
DoD is now trying to strongarm Anthropic into changing the deal that they already signed!
xpe|1 day ago
I’m not accusing the above commenter of deception; I’m merely saying reasonable people are skeptical. There are classic game theory approaches to address cooperation failure modes, and we have to use them. Apologies if this seems cryptic; I’m trying to be brief. If it doesn’t make sense, just ask.