
Enginerrrd | 2 days ago

> it simply does not require much intelligence (relatively speaking) to build something that points a gun at something and pulls a trigger?

I could not disagree more. A big part of that is also knowing when NOT to pull the trigger. And it’s much harder than you’d think. If you think full self driving is a difficult task for computers, battlefield operations are an order of magnitude more complex, at least.


ACCount37 | 2 days ago

We have fully autonomous weapons, and had them for over a century. We call them "landmines".

I expect autonomous weapons of the near future to look somewhat similar to that. They get deployed to an area, attack anything that looks remotely like a target there for a given time, then stand down and return to base. That's it.

The job of the autonomous weapon platform isn't telling friend from foe - it's disposing of every target within a geofence when ordered to do so.

fweimer | 2 days ago

And the arms industry has been pushing smart mines for decades, so that it can keep selling them despite the really bad long-term consequences (well beyond the end of hostilities) and the Ottawa Treaty ban. In the end, land mines keep killing people, even though the mines are supposed to be sufficiently advanced not to target persons.

From a security perspective, the “return to base” part seems rather problematic. I doubt you'd want these things to be concentrated in a single place. And I expect that the long-term problems will be rather similar to mines, even if the electronics are non-operational after a while.

golem14 | 2 days ago

Well, I assume that they are at least programmed not to attack their autonomous "comrades". Masquerading as such will be one obvious tactic, no? You could argue that these things would use e2e-encrypted messages as friend-or-foe designation, but I would imagine a contested area would be blanketed with jammers, leaving only other options (light? But smokescreens. Audio? Also easily jammed). So this isn't as easy as most people think.

Edit: No, I don't think a purely defensive stance like landmines is sufficient, or what the people in command have in mind.

We have landmines today. Why spend much more making marginally better, highly intelligent ones with LLMs?

snowwrestler | 2 days ago

You don’t need Anthropic for this use case, so obviously this use case is not what the current fight is about.

mothballed | 2 days ago

I guess by that definition, a bullet is also autonomous. It will strike anything in its path of flight, autonomously without further direction from the operator.

strangattractor | 2 days ago

"Since the end of the Vietnam War in 1975, unexploded ordnance (UXO)—including landmines, cluster bombs, and artillery shells—has killed over 40,000 people and injured or maimed more than 60,000 others." - Google AI Overview "How many children were maimed by landmines after the vietnam war"

thejohnconway | 2 days ago

Yes, but it doesn’t have to be error-free. The friendly fire rates in symmetrical hot wars are pretty high; it’s considered a cost of going to war.

If autonomous weapons lead to a net battlefield advantage, I agree with the GP, they will be used. It is the endgame.

collingreen | 2 days ago

The big asterisk in what you're saying is that, like self-driving cars, it's hardest when you want to be the most precise and limit the downsides. In this paradigm, false positives and false negatives both carry a very big cost.

If you simply wanted to cause havoc and destruction with no regard for collateral damage, then the problem space is much simpler, since you only need enough true positives to be effective at your mission.

Coding with AI has shown that it takes an even higher level of responsibility and discipline than before to get good results without out-of-control downsides. I think killing with AI would be the same way, but even more severe.

davidw | 2 days ago

> A big part of that is also knowing when NOT to pull the trigger

"In a press conference, Musk promised that the Optimus Warbots would actually, definitely, for real, be fully autonomous in two years, in 2031. He also extended his condolences to the 56 service members killed during the training exercise"

ben_w | 2 days ago

I've not watched all of Robocop (too much gore for me), but I have seen the boardroom introduction of the ED-209.

That's how I imagine a Musk demo of this kind of thing would play out, if his team can't successfully manage upwards.

0xffff2 | 2 days ago

And the US learned the hard way in Iraq that even human intelligence struggles with this. There were major problems throughout the war with individual soldiers not adhering to the published rules of engagement.

NoGravitas | 2 days ago

Yes, but the important bit is that autonomous drones can't be held accountable for not adhering to the published rules of engagement.