D_Alex | 2 days ago
"At Anthropic, we build AI to serve humanity’s long-term well-being."
Why does Anthropic even deal with the Department of @#$%ing WAR?
And what does Amodei mean by "defeat" in his first paragraph?
gambiting | 2 days ago
And I think the stakes have changed today. It's one thing to make bombs that might or might not hit civilians; it's another to make an AI system that assigns humans a "score" which the military then uses to decide whether they live or die, as some systems already do ("Lavender", used by the IDF, is exactly this).
Even with the best intentions in mind, you don't know how the systems you build will be used by the governments of tomorrow.
moozooh | 2 days ago
And nobody knows what he means by "defeat", because no journalist interrogates or pushes back on his grand statements when they hear them. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets there first, but he never elaborates on why, or on what he expects to happen if the opposite comes to pass. I'm assuming he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such an "empowered democracy" than of China. Because of Greenland, because of "our hemisphere". Hard nope to that.
Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he basically says so outright in the text. Humanity stops at Americans.