supposemaybe | 1 year ago
How far does the AI system go… is it behind the AI decision to starve the population of Gaza?
And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?
How far does the AI system go?
Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” Or “I was just following AI’s orders!”
There’s so much about this death machine AI I would like to know.
diggan | 1 year ago
No, the point of this program seems to be to find targets for assassination, removing the human bottleneck. I don't think bigger strategic decisions, like starving the population of Gaza, were bottlenecked in the same way that finding and deciding on bombing targets was.
> is it also behind the decision to kill the aid workers who are trying to feed the starving?
It would seem like this program gives whoever is responsible for the actual bombing a list of targets to choose from, so supposedly a human was behind that decision, aided by a computer. Then it turns out (according to the article, at least) that the responsible parties mostly rubber-stamped those lists without further verification.
> can an AI commit a war crime?
No, war crimes are about making individuals responsible for their choices, not about making programs responsible for their output. At least currently.
The users/makers of the AI surely could be held in violation of laws of war though, depending on what they are doing/did.
dfxm12 | 1 year ago
There is also another AI system that tracks when these targets get home.
> Additional automated systems, including one called “Where’s Daddy?” also revealed here for the first time, were used specifically to track the targeted individuals and carry out bombings when they had entered their family’s residences.
I think "assassination" colloquially means to pinpoint and kill one individual target. I don't mean to say you are implying this, but I do want to make it clear to other readers that according to the article, they are going for max collateral damage, in terms of human life and infrastructure.
> “The only question was, is it possible to attack the building in terms of collateral damage? Because we usually carried out the attacks with dumb bombs, and that meant literally destroying the whole house on top of its occupants. But even if an attack is averted, you don’t care — you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
barbazoo | 1 year ago
It's not that the "AI" described here is an autonomous actor.
> During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing
Obviously all this is to be taken with a grain of salt; who knows if it's even true.
thomastjeffery | 1 year ago
"An AI" doesn't exist. What is being labeled "AI" here is a statistical model. A model can't do anything; it can only be used to sift data.
No matter where in the chain of actions you put a model, you can't offset human responsibility to that model. If you try, reasonable people will (hopefully) call you out on your bullshit.
> There’s so much about this death machine AI I would like to know.
The death machine here is Israel's military. That's a group of people who don't get to hide behind the facade of "an AI told me". It's a group of people who need to be held responsible for naively using a statistical model to choose who they murder next.