tfehring | 1 day ago
My reading of this is that OpenAI's contract with the Pentagon only prohibits mass surveillance of US citizens to the extent that that surveillance is already prohibited by law. For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale. As I understand it, this was not the case with Anthropic's contract.
If I'm right, this is abhorrent. However, I've already jumped to a lot of incorrect conclusions in the last few days, so I'm doing my best to withhold judgment for now, and holding out hope for a plausible competing explanation.
(Disclosure, I'm a former OpenAI employee and current shareholder.)
gentleman11|1 day ago
Even on a personal level: OpenAI has changed its privacy policy twice to let them gather data on me that they weren't gathering before. A lot of steps to disable it each time, tons of dark patterns. And the data export just bugs out too; it's a fake feature to hide how much they are using everything you type to them.
mannanj|11 hours ago
If we had a simple lookup community maintained system for this, would you use it? What do you think its design would need to be to be used, gain traction and be valuable?
I want this so bad.
eduction|23 hours ago
Why would we want to trade our constitution for, effectively, “rules Sam Altman came up with”?
wrsh07|1 day ago
And, I mean, if they don't, gpt 5.3 is going to be pretty good help
Given the volume, fine-tuning a small model is probably the only cost-effective way to do it anyway.
caseysoftware|22 hours ago
Third Party Doctrine makes trouble for us once again.
Eliminate that and MANY nightmare scenarios disappear or become exceptionally more complicated.
enceladus06|8 hours ago
Other nations, including Israel and the PRC, will also be working on their own implementations, because if they aren't, they know everyone else is. So this is just basic game theory.
But the kicker is that five years from now we'll be able to run Codex 5.3x or Opus 4.6 on a $5000 Mac Studio, so nation states will want to implement this kind of technology into their defense apparatus immediately.
dataflow|23 hours ago
"Shall not be used as consistent with these authorities"?
So they shall only be used inconsistently with these authorities? That's the literal reading if you assume there's no typo.
Or did they forget a crucial comma that would imply they shall not use it, to the extent this provision is consistent with their authorities?
Or did they forget the comma, and it was supposed to mean that they shall not use it, to the extent that not doing so would be consistent with their authorities?
You gotta hand it to the lawyers; I'm not sure I could've come up with wording this deliberately confusing if they'd given me a million dollars.
irthomasthomas|1 day ago
Imagine arming chatgpt and letting it pick targets and launch missiles from clawdbot.
xvector|1 day ago
He calls this exact scenario out in last night's interview: https://youtu.be/MPTNHrq_4LU
godelski|1 day ago
<edit>
THAT'S EXACTLY WHAT DARIO WAS ARGUING, and it's exactly what the DOD wanted to get around. They wanted to use Claude for all legal purposes and Anthropic refused on moral grounds.
Also notice the subtle language in OpenAI's red lines: "No use of OpenAI technology for mass *domestic* surveillance." We've already seen how the NSA abused this distinction, since normal communication on the Internet often crosses international lines. And what they couldn't get done that way, they got around through allies who can spy on American citizens.
</edit>
I think we need to remember that legality != morality. Law is our attempt to formalize morality, but I think everyone sees how easy it is to skirt.[0]
Call your senators. There's a bill in the Senate explicitly about this; here's the EFF's take.[1] IMO it's far from perfect but an important step, and I think we should talk about this more. I have problems with it too, but is anything in it preventing things from continuing to get better? It's too easy to critique and then do nothing. We've been arguing for over a decade; I'd rather take a small step than a step back.
Let's also not forget WorldCoin[2]. World (blockchain)? World Network? I have no trust for Altman. His solution to distinguishing humans from bots is mass biometric surveillance. That seems as disconnected as the CEO of Flock, or that Ring commercial.
Not to mention all the safety failures. Sora was released allowing real people to be generated? Great marketing. Glad they "fixed it" so quickly...
There's a lot happening now and it's happening fast. I think we need to be careful. We've developed systems to distribute power but it naturally wants to accumulate. Be it government power or email providers. The greater the power, the greater the responsibility. But isn't that why we created distributed power systems in the first place?
Personally, I don't want autonomous, unquestioning killbots under the control of one person or a small number of people. Even if you believe the one in control now is not a psychopath (-_-), you can still agree that it's possible for that type of person to get control. Power corrupts. Things like killing another person should be hard, emotionally. That's a feature, not a flaw. Soldiers questioning orders is a feature, not a flaw. By concentrating power, you risk handing that power to those who do not feel. We're making Turnkey Tyranny more dangerous.
[0] and law is probably our best attempt to make a formal system out of a natural language but I digress
[1] https://www.eff.org/deeplinks/2024/04/fourth-amendment-not-s...
[2] https://en.wikipedia.org/wiki/World_(blockchain)