jstummbillig|2 days ago

On the one hand it's fantastic that people are resisting and, if nothing else, raising awareness and buying time.

On the other hand, is autonomous war not obviously the endgame, given how quickly capabilities are increasing and that it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?

It just needs one player to do it, so everyone has to be able to do it. I'd love to hear a different scenario.

renewiltord|2 days ago

That part isn't actually clear. If China invents autonomous drones instead of us and they fuck it up, they'll kill their own people.

Things like Scout AI's Fury system still keep a human in the loop, and for something that could just as well make a mistake and target your own troops, it's not yet clear that full auto is the way to go: https://scoutco.ai/

A human in the loop okaying full auto seems like it could work almost all the way. And then we count on geography: if they want to spray a bunch of autonomous drones into our territory, they first have to fly them here or pre-plant them in shipping containers. Better we aim at stopping that.

ACCount37|2 days ago

It's not that hard. DoD could find a contractor to do it. But Anthropic wants no part of it, and I get why.

jstummbillig|2 days ago

I absolutely do get it, but if you assume that eventually (and by that I mean: very, very soon) somebody else will do it, to what extent is this line of action simply opting out of having some say in all of it, while still bearing responsibility for a situation that you instrumented?

And I am honestly not sure.

If your stance is "well, this is something that should just not happen", and you also believe that it absolutely will happen, then what are you doing by saying "but it won't be us, it will instead be other people (who were enabled and inspired by our work in unsurprising ways)"?

On the other hand, just the act of resisting could tip the scale in some incalculable and hopefully positive way.

fcarraldo|2 days ago

Yes - Anthropic _does_ incur business risk if their products are misused and this becomes a scandal. Legally the government may be in the clear to use the product, but that doesn’t mean Anthropic’s business is protected. Moral concerns aside, it’s their prerogative to decide not to take on a customer that may misuse their product in a way that might incur reputational harm.

Or it was their prerogative, until the Trump administration. Now even private companies must bend the knee.

stronglikedan|2 days ago

> It just needs one player to do it, so everyone has to be able to do it.

Businesses stay out of potentially profitable market segments for various reasons, so I don't think everyone has to be able to do it to survive.

dylan604|2 days ago

We are constantly told that the board has a fiduciary responsibility to make investors money, overruling these various reasons.

jstummbillig|2 days ago

Oh, I meant at the state level. For businesses, yeah: the DoD (excuse me: Department of War) just needs one killer model.

Enginerrrd|2 days ago

> it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?

I could not disagree more. A big part of that is also knowing when NOT to pull the trigger, and it's much harder than you'd think. If you think full self-driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.

ACCount37|2 days ago

We have fully autonomous weapons, and had them for over a century. We call them "landmines".

I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.

The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.
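
A minimal sketch of that engagement policy, mostly to underline how little intelligence the decision loop needs. Everything here is hypothetical (the sensor and weapon APIs, the fence format, the confidence threshold); it illustrates the behavior described above, not any real system:

    from dataclasses import dataclass
    import time

    @dataclass
    class Detection:
        lat: float
        lon: float
        confidence: float  # detector's score that this "looks like a target"

    def inside_geofence(d: Detection, fence: tuple) -> bool:
        # Hypothetical fence: an axis-aligned box (lat_min, lat_max, lon_min, lon_max).
        lat_min, lat_max, lon_min, lon_max = fence
        return lat_min <= d.lat <= lat_max and lon_min <= d.lon <= lon_max

    def patrol(sensor, weapon, fence, duration_s=3600, threshold=0.5):
        """Attack anything that looks remotely like a target inside the
        fence for a fixed window, then stand down and return to base."""
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            for det in sensor.scan():      # hypothetical sensor API
                if inside_geofence(det, fence) and det.confidence >= threshold:
                    weapon.engage(det)     # hypothetical effector API
        return "RTB"  # window expired: stand down, return to base

Note there is no friend-or-foe logic anywhere in that loop; as with a landmine, the fence and the clock are the only safeguards.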

thejohnconway|2 days ago

Yes, but it doesn't have to be error-free. Friendly-fire rates in symmetrical hot wars are pretty high; it's considered a cost of going to war.

If autonomous weapons lead to a net battlefield advantage, I agree with the GP: they will be used. It is the endgame.

collingreen|2 days ago

The big asterisk in what you're saying is that, like self-driving cars, it's hardest when you want to be the most precise and to limit the downsides. In that paradigm, false positives and false negatives both carry a very big cost.

If you simply want to cause havoc and destruction with no regard for collateral damage, the problem space is much simpler, since you only need enough true positives to be effective at your mission.

Coding with AI has shown that it takes an even higher level of responsibility and discipline than before to get good results without out-of-control downsides. I think killing with AI would be the same way, but even more severe.

davidw|2 days ago

> A big part of that is also knowing when NOT to pull the trigger

"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"

0xffff2|2 days ago

And the US learned the hard way in Iraq that even human intelligence struggles with this. There were major problems throughout the war with individual soldiers not adhering to the published rules of engagement.

gom_jabbar|2 days ago

> It just needs one player to do it, so everyone has to be able to do it. I'd love to hear a different scenario.

Other players just need to assume that one player might do it in the future. This virtual future scenario has a causal effect on the now. The overall dynamic is that of an arms race (which radically changes what a player is).