niemandhier|3 days ago
For me, Anthropic's actions so far were the reason to lobby internally at my company for Claude and against Codex. I was successful; that's going to mean a few subscriptions.
7777777phil|3 days ago
Amodei is probably right that current models aren't reliable enough for high-stakes decisions, but the more useful question is what the failure mode looks like in practice. We're in an era that presents some novel problems.
thomassmith65|3 days ago
I've read a few people this week discuss the consideration that Anthropic's behavior itself will likely impact Claude's training.
The concern is that if Claude ingests news articles showing Anthropic behaving in a manner that clashes significantly with the values they want to instill in Claude, it could make training less effective.
It's all very weird.
niemandhier|3 days ago
If what you said were true, the only way to achieve a superior AI would be to incorporate the virtues one is aiming for.
That would solve so many of the conundrums of the field; I wish it were true.
thomassmith65|3 days ago
Informing the public of this dispute would highlight Anthropic's mission (i.e., responsible AI), which is a market differentiator.
The Pentagon would crawl back anyway, since Claude is the most effective model for programming tasks.
peddling-brink|3 days ago
Having not followed this closely at all, it seems like they are. If they weren't the best, why would the Pentagon be begging like this?
rasz|2 days ago
We are in an AI bubble; the public doesn't drive valuations.