spongebobstoes | 1 day ago
OpenAI can deploy safety systems of their own making.
From the military's perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. Under the Anthropic model, the military needs a legal opinion before they can use the tool, or they risk misusing it by accident.
This is also preferable if you think the government is untrustworthy. An untrustworthy government may not honor the contract, but it will have a hard time subverting safety systems that OpenAI builds or trains into the model itself.
mpalmer | 1 day ago
- When has any AI company shipped "safeguards" that aren't trivially bypassed by mid-tier bloggers? Just one example would be fine.
- The conventional wisdom is that OAI's R&D (including safety) lags significantly behind Anthropic's.
- OpenAI is constantly starved for funding; they don't make money. They have every incentive to say yes to a deal that entrenches them in government systems, regardless of the externalities.