
Google workers seek 'red lines' on military A.I., echoing Anthropic

288 points | mikece | 4 days ago | nytimes.com


139 comments


Xeronate|4 days ago

I understand the vision, but how does this work on a global scale? E.g., American employees refuse to build this, but China's don't.

Edit: I originally ended with "What would have happened if Germany had a nuclear bomb and America didn't?", but I think it distracted from the point I was trying to make so moving this to an edit. I'm not trying to ask "is the US the bad guy". I'm trying to ask how to balance personal anti war sentiments with the realities of the world (specifically in this case keeping up in an arms race).

protocolture|4 days ago

>American employees refuse to build this, but China's don't.

How about you articulate the threat from an AI powered China to people outside of AI powered China and discuss potential methods to counter that, instead of insisting capabilities be developed just in case.

>is the US the bad guy

Yes

>I'm trying to ask how to balance personal anti war sentiments with the realities of the world

Insist on open information, never surrender consent willingly, and demand justification for everything. As always.

skybrian|4 days ago

Not to worry, xAI would do it even if Google didn't.

Also, Anthropic didn't actually refuse to work on all military stuff. They have some conditions, which isn't the same thing.

maxglute|4 days ago

Well, game theory aside, the reality is that if the PRC weaponizes AI, there's a chance they may use it in the future. If the US weaponizes AI, they'll definitely be using it to kill people within the calendar year. Employees have to factor that in: for a PRC worker, the killing is hypothetical; for a US worker, it's inevitable.

khafra|4 days ago

Do the same thing we did with the nuclear arms race: Treaties to limit and control it.

Obviously, we would have had more political leverage if our leaders had started working on a treaty before they crossed enough moral red lines to start a tech revolt, but we did not elect the sort of leaders that would do that.

dheera|4 days ago

> American employees refuse to build this, but China's don't.

It's not American employees vs. Chinese employees. No need to villainize China at every opportunity. Most Chinese employees are more similar to American employees than you think.

It's {top candidates who have their pick of employers} have the luxury to refuse to build this.

A mid-tier dev who can't land a job at any of the top AI companies, can code with Cursor, and is trying to pay rent or medical bills will absolutely build AI for the military in return for having that rent paid.

This is regardless of whether it is in the US or China.

squibonpig|2 days ago

As I understand it, Anthropic refused two things: domestic surveillance in the US, and weapons automated such that they could kill without a human in the loop. I don't think either of these would hamper the US against China in any meaningful way.

nashashmi|4 days ago

The reason it works is that when you have fewer participants in an effort, you have slower progress in that endeavor. Brilliant employees prohibiting their entire org from supporting the development of bad things prevents less brilliant employees from doing bad things.

It is sort of like computers are amazing but can also be a privacy nightmare. Software engineers don’t help or coordinate with black hat hackers. So black hat hackers have a harder time refining their systems.

impossiblefork|4 days ago

Well, then military use of some US commercial AI systems will be subject to minimal restrictions while Chinese AI might not be.

Thus some people avoid having to see their work used for killing people or in mass surveillance, so that they're actually able to contribute to AI development instead of leaving the field.

baq|4 days ago

That’s exactly why I think the principled position is naive in the tragedy-of-the-commons situation we’re in. This isn’t a sci-fi story with a happy ending; it’s the Manhattan Project, and 70+ years ago, Nazi and Japanese data centers doing foundation model training would’ve been bombed to smithereens at any cost.

pjjpo|3 days ago

No need to even invoke China: Microsoft, Palantir, etc. will continue to support the US military, likely using Google technology in the process (Guava, gRPC maybe, k8s assuredly, etc.).

Sorry, but if you truly believe in keeping technology out of bad contexts, the only way to avoid it is to change careers. The issue with news like this is that it's hard to actually trust the protesters: they're probably happy to clear their consciences personally while continuing to reap the benefits of living off the tech industry. Have your cake and eat it too.

Sometimes people do quit - they're probably the ones you want to hire if you care about ethics. Most don't though.

jmyeet|4 days ago

I'm going to give a shout out here to an episode of the excellent podcast Hardcore History, specifically Episode 59: The Destroyer of Worlds [1].

The development of the atomic bomb created a debate in American policy circles about how the US should react. Within a few years, the same debate occurred over developing thermonuclear weapons. The same question kept coming up: what if the enemy has these weapons and we don't?

Dan Carlin's position, which I happen to agree with, is that America chose wrong. It became both belligerent and paranoid to a degree that just wasn't the case before WW2. If you look up the history of regime changes at the hands of the US [2] then you can see it went into overdrive after 1945.

Part of the problem here I think is projection, the psychological phenomenon. It's also a cultural phenomenon. So, for example, when you have a historically oppressed people who are being potentially freed, the oppressors will fret that the formerly oppressed will rise up and kill them. This is projection.

We saw this exact thing play out with Emancipation. There was no mass revenge violence by the former slaves. If anything, there was more violence by the former oppressors against freed slaves, and a system that excused the violence (e.g., the Colfax massacre [3]).

I think nations can be guilty of this too. The US sees any other global power as a potential hegemonic, imperialist power that will dominate and exploit everyone around them because, well, that's what we do.

We also see this in how we view AI as a resource. We see it as something to be owned and gatekept such that some US company will become insanely wealthy further extracting every last dollar from every person on Earth.

So your comment betrays a common fear that China will displace us as a global hegemonic, imperialist power, despite there being zero evidence that China behaves in that fashion. American propaganda runs deep and the projection is strong, so this will immediately cause some to say "but Tibet" or "but Taiwan" without really knowing anything about any of those situations.

As just one example, the One China policy is the official policy of the US, the EU, and almost every nation on Earth. "They might invade," I preemptively hear. They won't, partly because they can't, but really because they don't need to. If the world already has the One China policy, why do anything? And I said they can't because they don't have that military capability. If you think they do, you don't know anything about war. Crossing 100 miles of ocean to invade an island with an army of over 500,000 is simply not possible.

Let me put it this way: the 17 or so miles of the English Channel stopped the German war machine, despite its millions of soldiers.

Anyway, back to the point: this whole argument of "what if China does military AI?" is (IMHO) projection. If anything, China has shown that it won't allow a US tech company to control and gatekeep AI (e.g., by releasing DeepSeek). And if China gets AI, they're more than likely to use it to further raise people out of poverty and to automate away more menial jobs without making the displaced workers homeless.

[1]: https://www.dancarlin.com/product/hardcore-history-59-the-de...

[2]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...

[3]: https://en.wikipedia.org/wiki/Colfax_massacre

CasualSuperman|4 days ago

With current leadership, I think we're closer to Germany in this analogy.

rozal|4 days ago

[deleted]

SpicyLemonZest|4 days ago

Is there any reason to think that autonomous weapons are a critical strategic capability? It's hard to see what an unpiloted drone can do that a remotely piloted drone can't, other than perhaps human rights violations.

moogly|4 days ago

If we're going to have to rely on self-regulation for this, we're already doomed.

mikestorrent|4 days ago

There is only self regulation, ultimately, at the top. I think it's still progress to see these groups specifically call out their moral hesitations, even if it doesn't go anywhere - it gives people ground to realize that others share their concerns. All movements, all progress starts from people putting their stance out there and getting a conversation going around the topic; that builds mindshare and eventually a demand for change.

Analemma_|4 days ago

Sure, but we’re currently so fucked that even self-regulation is clearly superior to kneeling to the Mad King and his drunkard Secretary of War.

sudonem|4 days ago

As much as I applaud the intention, the genie has been out of the bottle on this one for many years already.

taurath|4 days ago

There's always this comment, saying that it's useless to even try to govern or resist the advancement, development, or use of weapons capable of indiscriminate killing.

If the world actually worked the way they believe it does, if restraint were simply not possible, the world would have been destroyed at least three documented times over.

Don't listen to them.

markus_zhang|4 days ago

Arguably it has always been there, considering the US military sponsored so many computing projects.

rvz|4 days ago

Have we already forgotten about this? [0] Where was the open letter then?

Both companies (Google, OpenAI [0]) have defense contracts. At this point, the best course of action is to leave Google and OpenAI if you disagree with that (they won't).

[0] https://www.theguardian.com/technology/2025/jun/17/openai-mi...

jimmydoe|4 days ago

I say stay, and do a subtly bad job there.

protocolture|4 days ago

The line should be "no" not "limited domestic use".

beanjuiceII|4 days ago

100 Google employees, wow.

dietr1ch|4 days ago

And they'll be terminated by Jan 2027. Anything too scandalous will be done in secrecy thanks to code&project silos.

verdverm|4 days ago

every change starts with a few people, and then it grows

nojvek|4 days ago

US leaders have realized how much power the CCP has over China's citizens. They want that too. Same with the EU.

What makes the US the US is its transparent, integrated markets and the ability of information, people, and goods to move freely.

With AI used for mass surveillance, information can’t move that freely. Free speech gets suppressed. You can’t call the emperor naked.

The Emperor is naked, with a poopy diaper. But we can’t say that aloud.

RickJWagner|4 days ago

That seems counter intuitive to me.

Surveillance is information gathering, and synthesizing insights that were previously undiscovered. It creates more information, albeit not shared information.

I suppose it decreases the percentage of information that’s free. But it increases the amount of total information.

As for free speech suppression, I don’t see any of that happening. If anything, there is too much being said from across the political spectrum. I’d be happy if people would say less, especially in hyperbole. I can’t see where free speech has been impinged upon at all.

browningstreet|4 days ago

Given Jeff Dean’s political activity on X, I’m guessing he’s aligned to the resistance too. Not sure the rest of management is interested in caving.

peyton|4 days ago

The resistance goes out the window the first time an American is gunned down by an autonomous system. They should do whatever possible to prevent that outcome.

Havoc|4 days ago

Very much doubt Google will take a principled stance.

OrvalWintermute|4 days ago

Google has been evil for at least a decade, if not longer than that.

This is just pigslop masquerading as a moral stand.

What happened to the OG Google that cared about users, prioritized honest search, fast performance, and didn't murder pages with ads?

mighmi|4 days ago

> evil

They never removed "don't be evil", they just changed where it is in the document.

dyauspitr|4 days ago

Honestly, I want tech companies to make our military strong, just not under this guy. He’s going to turn around and use it directly on Americans.

SilverElfin|4 days ago

They need to unionize quickly to protect their employment, and include this as part of their bargaining.

blobbers|4 days ago

Am I the only one who remembers the prime directive of Google? Much easier to understand than 'organizing the world's information', etc. It was simpler.

Don't be evil.

DanielHall|3 days ago

Another rather peculiar point: why did the Department of War reject Anthropic and label it a supply chain risk entity, despite its "prohibited content" policy being almost identical to OpenAI's stated policy in its announcement, yet award the contract to OpenAI?

sidibe|4 days ago

I remember they successfully got Google out of a military contract in the first admin (and were briefly vilified by the right for it). That's not going to work now. Workers have a lot less power, and the CEO is buddies with Trump.

SpicyLemonZest|4 days ago

As the article says, the workers didn't petition the CEO, they petitioned the head of Google AI who's already expressed solidarity with Anthropic. If they can convince Jeff Dean, I don't think Sundar necessarily gets a say; it's a lot easier to stick your head in the sand and ignore things than to fire one of your most widely respected engineers because he won't help the Pentagon build Terminator robots.

ecshafer|4 days ago

This gets a giant eye roll from me. Are you really so naive that you thought you could work on AI for a giant tech company, creating software capable of finding deep patterns in massive amounts of data, and it wasn't going to be used by the defense/intelligence industry? If you are so against the US government, and you are working for ANY big tech company, you are aiding the intelligence and defense industry. Government uses AWS and Azure. Intelligence agencies use the data and tools of Meta, Google, Apple, etc.

Schmerika|4 days ago

You're not entirely wrong, but maybe you could consider supporting people who move in the right direction rather than rolling your eyes at them.

raw_anon_1111|4 days ago

Google employees must think this is pre-2024. The employer has the power and doesn’t mind laying off people who don’t toe the company line, and all the CEOs bend over and bribe the President, i.e., by “settling” frivolous lawsuits Trump himself brought over “censorship” when he was out of office.

SpicyLemonZest|4 days ago

I think a lot of software companies are going to learn just how much employee power remains tomorrow, in the very likely event that the Pentagon issues an order purporting to ban all defense contractors from using Claude.

archagon|4 days ago

Time to unionize.

miohtama|4 days ago

"Don't do evil"

Oh, wait...

dotancohen|4 days ago

Defending one's own country is not evil, no matter how much money Qatar pours into Western social media influencers.

ecshafer|4 days ago

Aiding your nation is not evil; in fact, it's the opposite. It's good.

ihsw|4 days ago

[deleted]

tokyobreakfast|4 days ago

"Google employees make demands of US Military from nap pods, ball pit."

I assume by red lines they are referring to a life-sized tic-tac-toe game board painted in a hallway.