top | item 47173642

qaid | 3 days ago

I was reading halfway through when one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War

nubg|3 days ago

I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.

m000|3 days ago

How about the present and his personal beliefs?

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

This reads like his objection is not to "autocratic", but to "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.

taurath|3 days ago

> It's not up to Dario to try to make absolute statements about the future.

That's insane to say, given that he's literally acting in the public sphere as the mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs and AGI will take over our society and kill us all.

andrewljohnson|3 days ago

This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as coming from; it’s Anthropic as an entity.

lm28469|3 days ago

He does it all the time when it helps sell his products, though. Strange.

titzer|2 days ago

It's not called The Department of War.

It's just incredible to me that people think this is some kind of bold statement defying the administration when it is absolutely filled with small and medium capitulations, laying out in numerous examples how they just jumped right in bed with the military.

And no one seems disturbed by the blatant Orwellian doublespeak throughout. "We thoroughly support the mission of the Department of War"--because War is Peace.

nhinck2|3 days ago

He does it all the time.

camillomiller|3 days ago

And yet he’s quite happy to make just that kind of statement when it helps drum up his own product for investors

trvz|3 days ago

He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.

samtheDamned|3 days ago

I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts *America’s* warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).

MetaWhirledPeas|2 days ago

> I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd

We've always been OK with this in the pre-AI era. (See the plot line of dozens of movies where the "good" government spies on the "bad" one.) Heck, we've even been OK with domestic surveillance. (See "The Wire".) Has something changed, or are we just now realizing how problematic it is?

jazzyjackson|3 days ago

See also: the entire history of Silicon Valley

When Google Met WikiLeaks is a fun read; billionaire CEOs love to take America's side.

ghshephard|3 days ago

I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that right now they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.

asdff|3 days ago

The US military cannot even offer those assurances itself today. I tried to look up the last incident of friendly fire. Turns out it was a couple of hours ago, when the US military shot down a DHS drone in Texas.

Onewildgamer|3 days ago

Fully autonomous weapons are a danger even if we can make them work reliably, with or without AI.

It essentially becomes a computer against a human. And once such software is developed, who's going to stop it from spreading to the masses? Imagine software viruses/malware that can take a life.

I'm shocked that so few are even bothered by this. It's really concerning that technology developed for human welfare could end up turned totally against humans.

TaupeRanger|3 days ago

What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?

crabmusket|3 days ago

> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.

goatlover|3 days ago

I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.

harimau777|2 days ago

Yes, that's exactly what I want them to say.

archagon|3 days ago

Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?

asadotzler|2 days ago

>Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Yes, that's precisely what we want.

skeledrew|3 days ago

Well, if they hadn't stated that they were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on eggshells.

sithamet|3 days ago

Also, as someone from a country that has been attacked and dragged into war, I would prefer machines fighting (and being destroyed autonomously) rather than my people dying, or people from any nation that came to help.

That's as Anthropic as it gets, if your nerve extends a little further than your HOA.

mrtksn|2 days ago

What do you think will happen once the machines finish fighting? Do you think the losing side will say "oh no, our machines lost, we'd better hand our things over to the winning machines"?

After your machines are destroyed you will be fighting machines, or machines will extract from you and constantly optimize you. They will either exterminate you or keep you busy enough not to have time for resistance. If you have something of value they will take it away. The best-case scenario is that you get to join the owners of the machines, who keep you busy so that you don't have time to raise concerns about your second-class citizenship.

Quarrelsome|2 days ago

> would prefer machines fighting (and being destroyed autonomously) rather than my people dying

But the reality is more like the surprise of a bunch of submersible kill bots terrorising a coastal city and murdering people. Even in bot-first combat, at some point one side's bots win either totally, allowing them to kill people indiscriminately, or partially, which forces the side on the back foot to pivot to guerrilla warfare and terror attacks, using robots.

gambiting|3 days ago

>> I would prefer machines fighting (and being destroyed autonomously) rather than my people dying

What makes you think in any war the machines would stop at just fighting other machines?

kingkawn|2 days ago

What about machines slaughtering the population without pause?

preisschild|3 days ago

The more likely scenario will be "your people" dying in a war against machines that don't tend to disregard illegal orders.

orochimaaru|3 days ago

They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.

rafark|3 days ago

I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context, because the topic at hand is about Americans? I don’t know, but it gives “my people are more important than your people,” exactly as you said in your last paragraph.

nielsole|3 days ago

You gotta keep in mind that the primary goal of this statement is to avert the invocation of the Defense Production Act.

He is trying to win sympathies even (or especially?) among nationalist hawks.

01100011|3 days ago

We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.

kgwxd|3 days ago

But then a person can be blamed for the outcome. We can't have that!

asaddhamani|3 days ago

They also posted on Instagram saying autonomous killing would hurt Americans. So non American people don’t matter?

yujzgzc|3 days ago

> the door is open for this after AI systems have gathered enough "training data"?

Sounds more like the door is open for this once reliability targets are met.

I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.

altpaddle|3 days ago

Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon

not_the_fda|3 days ago

And that's the end of democracy. One of the safeguards of democracy is a military that is trained not to turn against the citizens. Once a government has fully autonomous weapons, it's game over. They can point those weapons at the populace at the flip of a switch.

levocardia|3 days ago

Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.

tempestn|3 days ago

If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.

I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.

Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.

scottyah|3 days ago

It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.

Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.

Aeolun|3 days ago

Is it seriously called the department of war now? Did they change that from DoD?

lkbm|2 days ago

The Executive branch has de facto renamed it. Legally, the name is still Department of Defense, as that's set by Congress.

Think of it as a marketing term, I guess.

Sebguer|3 days ago

illegally, but yes

urikaduri|3 days ago

The Gandhi of the corporate world is yet to be found

scottyah|3 days ago

Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.

jamesmcq|3 days ago

So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?

Odd.

serf|3 days ago

do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordnance?

a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.

gedy|3 days ago

Shh! there's a lot of money riding on this bet, ahem.

nhinck2|3 days ago

> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

You have to be deliberately naive, in a world where Five Eyes exists, to somehow believe that "foreign" mass surveillance won't be used domestically.

aidis9136264|3 days ago

Enemies will have AI powered weapons. We need to be at the cutting edge of capability.

Throwagainaway|3 days ago

I don't know where you're getting your info from, but Anthropic has only ruled out autonomous AI killing humans without anyone pressing a button or bearing some liability, plus mass surveillance.

I don't think your point makes sense, especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.

I don't think the people operating the drones are a bottleneck for a war between your country and your enemies; rather, they're a bottleneck for a war between your country and its own people. The bottleneck is morality: you would find fewer people willing to commit the same atrocities against their own community, but terminator-style AI is an orphan with no community, i.e. it has no problem following any orders from the govt. And THIS is the core of the argument, because Anthropic has safeguards to reject such orders, and the DoD is threatening to essentially kill the company by invoking many laws to force it to give in.

ImPostingOnHN|2 days ago

US-controlled, AI-powered, fully-autonomous killbots are more likely to be used against US civilians before any sort of invading enemy.

Are you prepared to be the "enemy" of these soulless killbots? Do you personally have AI-powered weapons? You need to be at the cutting edge of capability, right?

sithamet|3 days ago

What a shame, indeed. Chinese and Russians would never do something like that and hurt either their or your people, too

MattDamonSpace|3 days ago

The sentence prior explicitly says this. There’s no dishonesty here.

“Even fully autonomous weapons (…) may prove critical for our national defense”

FWIW there’s simply no way around this in the end. If your enemy even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.

blitzar|3 days ago

To stop a bullet flying at you you need a shield not another bullet.

mgraczyk|3 days ago

Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it

nextaccountic|3 days ago

If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?

Snowden revealed that every single call in the Bahamas was being monitored by the NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

(Note, I myself am not an US citizen)

Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]

[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...

[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...

827a|3 days ago

If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines, you may find yourself praying that we have our own rather than praying human nature changes. Of course, we must strive for this to never happen; but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

RGamma|3 days ago

Given how unstable and aggressive the US government is at the moment, others having these weapons seems like a good idea for balance. Not sure you're aware of the damage Trump is inflicting on international relations.

But personally I wouldn't like to die because some crackpot with the right connections can will the rest of the world to that fate, no matter their affiliation. This escalation of destructive power, and the carelessness with which it is justified, is pretty disheartening to see. Good times create bad people?

gizzlon|3 days ago

> but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict

remarkEon|3 days ago

As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. If you don't like what it could potentially be used for, or are having second thoughts about being involved in war-making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.

On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.

zaptheimpaler|3 days ago

They didn’t sell it no strings attached; they sold it with explicit restrictions in their contract with the DoW, and the DoW agreed to that contract. Their mistake was assuming they operate in a country where the rule of law is respected, which is clearly no longer the case given the thousands of violations in the last year.