Years ago, scholars such as Didier Bigo were already raising concerns about the targeting of individuals based merely on (indirect) association with a "terrorist" or "criminal". Originally used in the context of surveillance (see the Snowden revelations), such systems would target anyone who was, say, less than three steps removed from an identified individual, thereby removing any sense of due process or targeted surveillance. Now, such AI systems are being used to actually kill people, not just surveil them.
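To make the "three steps" mechanism concrete, here is a minimal sketch of association-based selection; the contact graph, names, and hop threshold are all invented for illustration, not taken from any real system:

    from collections import deque

    # Toy contact graph: an edge means "communicated with" (invented data).
    contacts = {
        "seed": ["a", "b"],
        "a": ["seed", "c"],
        "b": ["seed", "d"],
        "c": ["a", "e"],
        "d": ["b"],
        "e": ["c"],
    }

    def within_k_hops(graph, seed, k):
        """Breadth-first search: everyone reachable in at most k steps."""
        seen = {seed: 0}
        queue = deque([seed])
        while queue:
            node = queue.popleft()
            if seen[node] == k:
                continue
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen[nxt] = seen[node] + 1
                    queue.append(nxt)
        return set(seen) - {seed}

    print(within_k_hops(contacts, "seed", 3))  # {'a', 'b', 'c', 'd', 'e'}

Note how quickly this sweeps in people whose only "link" is a friend of a friend of a friend, which is exactly the due-process concern.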
IHL actually prohibits the killing of persons who are not combatants or "fighters" of an armed group. Only those who have the "continuous function" to "directly participate in hostilities"[1] may be targeted for attack at any time. Everyone else is a civilian that can only be directly targeted when and for as long as they directly participate in hostilities, such as by taking up arms, planning military operations, laying down mines, etc.
That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack - all the others must be arrested and/or tried. Otherwise, the allowed list of civilian targets gets so wide that in any regular war, pretty much any civilian could get targeted, such as the bank employee whose company has provided loans to the armed forces.
Lavender is so scary because it enables Israel's mass targeting of people who are protected against attack by international law, with their alleged association with terrorists providing a flimsy (political, but not legal) justification.
[1]: https://www.icrc.org/en/doc/assets/files/other/icrc-002-0990...
It always starts with making a list of targets that meet given criteria. Once you have the list, its use shifts from categorisation to demonisation -> surveillance -> denial of rights -> deportations -> killing. Early use of punch-card tabulating machines by Germany during WW2 included compiling and processing lists of people who were to be sent to concentration camps. The only difference today is that we are able to capture more data and process it faster, at scale.
> Everyone else is a civilian that can only be directly targeted when and for as long as they directly participate in hostilities, such as by taking up arms, planning military operations, laying down mines, etc.
There is some incredible magic that often happens: as soon as anyone is targeted and killed, they immediately transform from civilians to "collaborators", "terrorists", "militants" etc. Of course everything is classified and restricted to avoid anyone snooping around and asking questions.
It's also interesting (and I guess typical for end-users of software) how quickly and easily something like this goes from "Here's a tool you can use as an information input when deciding who to target" to "I dunno, computer says these are the people we need to kill, let's get to it".
In the Guardian article, an IDF spokesperson says it exists and is only used as the former, and I'm sure that's what was intended and maybe even what the higher-ups think, but I suspect it's become the latter.
By the standards discussed in the article, anyone with a beef with Israel could justify targeting possibly a majority of buildings in Israel. After all, most of the population is required to serve in the IDF.
> That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers,..
I think the loophole here is that a weapons manufacturing facility is almost certainly a strategic military target, and international law allows you to target the infrastructure provided the military advantage gained is proportional to the civilian deaths.
So you can't target the individuals, but according to international law it's fine to target the building they are in while the individuals are still inside, provided it's militarily worth it.
Practical AI did a podcast episode about the dangers of using AI models as a shield to hide behind in justifying your decisions. The episode was titled "Suspicion Machines" and based on the linked article [1], and I think it's worth a read/listen.
[1]: https://www.wired.com/story/welfare-state-algorithms/
That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack
It seems wrong that you can't target weapon manufacturers; can you cite a source? Weapon manufacturers contribute to the military action, and destroying weapon manufacturers contributes to military advantage.
https://www.abc.net.au/news/2024-04-03/world-central-kitchen...
https://www.abc.net.au/news/2024-04-02/israeli-strike-that-k...
Pretty disgraceful (which itself feels a disgracefully unimpactful thing to say regarding people losing their lives whilst doing charity work).
That's not exactly a prediction. It was standard operating procedure for Warsaw Pact nations. They used human intel, which was possibly even worse because it could be manipulated out of malice.
> Only those who have the "continuous function" to "directly participate in hostilities"[1] may be targeted for attack at any time.
The problem with Hamas is that they don't shy away from hiding combatants in civilian clothing or using women and children as suicide bombers. There is more than enough evidence of this tactic, dating back many, many years [1].
By not just failing to prevent such war crimes but actively ordering them, Hamas leadership has stripped its civilian population of the protections of international law.
> Otherwise, the allowed list of civilian targets gets so wide that in any regular war, pretty much any civilian could get targeted, such as the bank employee whose company has provided loans to the armed forces.
In regular wars, it's uniformed soldiers against uniformed soldiers, away from civilian infrastructure (hospitals, schools, residential areas). The rules of war make deviating from that a war crime in its own right, because it forces the other party in the conflict to choose between having no chance to wage the war and committing war crimes of its own.
[1] https://en.wikipedia.org/wiki/Use_of_child_suicide_bombers_b...
> That is, only members of the armed wing of Hamas (not recruiters, weapon manufacturers, propagandists, financiers, …) can be targeted for attack - all the others must be arrested and/or tried.
In theory, yes. In practice: in which make-believe world is this true?
Never thought I'd even consider this, but is this a case where those involved in producing and developing this software should be tried for murder or crimes against humanity?
My understanding is that AI in its current form is not an applicable technology to be anywhere near this type of use.
Again, my understanding: inference models are by their very nature largely non-deterministic, in terms of being able to evaluate accurately against specific desired outcomes. They need large-scale training data to provide even low levels of accuracy, and that type of training data just isn't available; it's all likely to be based on one big hallucination, is my take. I'd be surprised if this AI model was even 10% accurate. It wouldn't surprise me if it was less than 1% accurate. Not that accuracy appears to be critical, from what I've read.
The Guardian article (https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...) makes me wonder whether AI development should be allowed at all. I didn't even have that thought before today.
This specific application and the claimed rationale are as close as I have come to seeing what I consider a true and deliberate "Evil application" of technology out in the open.
Is this a naive take?
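On the accuracy point: even granting a classifier far better than the figures guessed above, the base rate does most of the damage. A back-of-envelope sketch, where every number is my own assumption and none comes from the article:

    # Hypothetical numbers, chosen only to illustrate the base-rate problem.
    population = 2_000_000      # people scored by such a system (assumed)
    true_members = 30_000       # actual positives (assumed)
    sensitivity = 0.90          # fraction of real members flagged (assumed)
    false_positive_rate = 0.05  # fraction of everyone else wrongly flagged (assumed)

    true_pos = true_members * sensitivity
    false_pos = (population - true_members) * false_positive_rate
    precision = true_pos / (true_pos + false_pos)

    print(f"flagged: {true_pos + false_pos:,.0f}")   # flagged: 125,500
    print(f"precision: {precision:.1%}")             # precision: 21.5%

Under those (generous) assumptions, roughly four out of five people flagged would be false positives. With a rare positive class, the false-positive rate dominates whatever the headline "accuracy" is.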
Depends on what "AI" means here. There is a spectrum of "we have a bunch of data in a database and some folks hand tuning queries" to "we built a deep learning network to predict XYZ." In the middle of that spectrum lie things like decision trees which provide explainable results.
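For instance, here is a minimal sketch of that explainable middle ground, using toy invented features and assuming scikit-learn is available; it is not meant to reflect any real system:

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy, invented features per person: [flagged_contacts, new_phone].
    X = [[0, 0], [1, 0], [3, 1], [5, 1], [0, 1], [4, 0]]
    y = [0, 0, 1, 1, 0, 1]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    # Unlike a deep network, the learned rules can be printed and audited:
    print(export_text(tree, feature_names=["flagged_contacts", "new_phone"]))

The point is not that a tree is the right model, only that its decision path can be inspected line by line, whereas a deep network's cannot.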
It is not a naive take. Not by a long shot. Knowingly working on the development or upkeep of such a system, knowing full well its limitations and its aftermath, obliterates any claim to clean hands in my eyes.
It’s amply clear from reporting that the IDF has no formal RoE on the ground - low-level commanders have full autonomy to kill whomever, whenever, with zero oversight.
The “AI” exists to retcon the justification for any particular genocidal act, but this is really just an old school mindless slaughter driven by anger and racism.
> This specific application and the claimed rationale are as close as I have come to seeing what I consider a true and deliberate "Evil application" of technology out in the open.
Someone will double down and include AI into the execution phase via AI controlled drones, tanks, etc. Then they will claim no responsibility and blame the ghost-in-the-shell.
As bad as this story makes the Israelis sound, it still reads like ass-covering to make it sound like they were at least trying to kill militants. It's been clear from the start that they've been targeting journalists, medical staff and anyone involved in aid distribution, with the goal of rendering life in Gaza impossible.
I'm disturbed by the idea that an AI could be used to make decisions that proactively kill someone. (Presumably computers already make decisions that passively kill people by, for example, navigating a self-driving car.) Though there was a human sign-off in this case, it seems one step away from people being killed by robots with zero human intervention, which is about one step away from the plot of Terminator.
I wonder what the alternative is in a case like this. I know very little about military strategy: without the AI, would Israel have been picking targets less haphazardly, or more? I think there may be some misreading of this article where people imagine that if Israel weren't using an AI they wouldn't drop any bombs at all; that's clearly unlikely given that there's a war on. Obviously people, including innocents, are killed in war, which is why we all loathe war and pray for the current one to end as quickly as possible.
I know many people won't read past the headline, but please try to.
This is the second paragraph:
"In addition to talking about their use of the AI system, called Lavender, the intelligence sources claim that Israeli military officials permitted large numbers of Palestinian civilians to be killed, particularly during the early weeks and months of the conflict."
I suggest everyone listen to the current season of the Serial podcast.
>processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.
This is really no different from how the world worked after 2001, when choosing who to send to Gitmo and other more secretive prisons, or whose location to bomb.
More than anything else it feels like, just as in the corporate world, the engineers in the army are overselling the AI buzzword to do exactly what they were doing before it existed.
If you use your PayPal account to send money to an account identified as ISIS, you're going to get a visit from a three-letter organization really quick. This sounds exactly like that, from what the users are testifying to. Any decision to bomb or not bomb a location wasn't up to the AI, but to humans.
> “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” A., an intelligence officer, told +972 and Local Call. “On the contrary, the IDF bombed them in homes without hesitation, as a first option. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
Watching i24 news is a little unsettling. They run bits with interrogators announcing how productive torture has been, and make jokes about how it would be much easier if lemons just gave up their juice without being squeezed.
Okay, how is this not a war crime?
There are ~2M civilians who live in Gaza, and many of them don't have access to food, water, medicine, or safe shelter. Some of those unfortunates live above, or below, Hamas operatives and their families.
"Oh, sorry, lol." "It was unintentional, lmao, seriously." "Our doctrine states that we can kill X civilians for every hostile operative, so don't worry about it."
The war in Gaza is unlike Ukraine -- where Ukrainian and Russian villagers can move away from the front, either towards Russia or westwards into Galicia -- and where nobody's flattening major population centers. In Gaza, anybody can evidently be killed at any time, for any reason or for no reason at all. The Israeli "strategy" makes the Ukrainians and Russians look like paragons of restraint and civility.
Isn’t a military person a legitimate target in a time of war? I think it is; the issue is the collateral damage. But then again, this war shows that Hamas is also not following the rules and gets too close to civilians.
I wonder how accurate this technology really is, or if they care so little about the results and more about the optics of being seen as advanced. On one hand it’s scary to think this technology exists, but on the other it might just be a pile of junk, since the output is so biased. What’s even scarier is that it’s proof that people in power don’t care about “correct”; they care about having a justification to confirm their biases. It’s always been the case, but it’s even more damning that this extends to AI. Previously you were limited by how many humans could lie, but now you’re limited by how fast your magic black box runs.
In 2018, Google CEO Sundar Pichai, SVP Diane Greene, SVP Urs Hölzle, and top engineer Jeff Dean built a system like Lavender for the US military (Project Maven). The US military planned to use it to analyze mass-surveillance drone footage to pick suspects in Pakistan for assassination. They had already dropped bombs on hundreds of houses and vehicles, murdering thousands of suspects and their families and friends [0].
I was working in Urs's Google Technical Infrastructure division. I read about the project in the news. Urs had a meeting about it where he lied to us, saying the contract was only $9M. It had already been expanded to $18M and was on track for $270M. He and Jeff Dean tried to downplay the impact of their work. Jeff Dean blinked constantly (lying?) while downplaying the impact. He suddenly stopped blinking when he began to talk about the technical aspects. I instantly lost all respect for him and the company's leadership.
Strong abilities in engineering and business often do not come with well-developed morals. Sadly, our society is not structured to ensure that leaders have the necessary moral education, or to remove them when they fail so completely at moral decisions.
[0] https://en.wikipedia.org/wiki/Drone_strikes_in_Pakistan
https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai...
And, personally, I think that stories like this are of public interest - while I won’t ask for it directly, I hope the flag is removed and the discussion can happen.
I would hope they can be unflagged and merged; this appears to be an important story about a novel use of technology.
[0] https://news.ycombinator.com/item?id=39917727
The difference between the previously revealed 'Gospel' and this 'Lavender' is explained here:
> "The Lavender machine joins another AI system, “The Gospel,” about which information was revealed in a previous investigation by +972 and Local Call in November 2023, as well as in the Israeli military’s own publications. A fundamental difference between the two systems is in the definition of the target: whereas The Gospel marks buildings and structures that the army claims militants operate from, Lavender marks people — and puts them on a kill list."
It's one thing to use these systems to mine data on human populations for who might be in the market for a new laptop, so they can be targeted with advertisements - it's quite different to target people with bombs and drones based on this technology.
Given the total failure to achieve any of its stated objectives, has this use of AI benefited the IDF at all?
I would argue that the only outcome it has had that directly relates to IDF objectives has likely been negative (i.e. the unintended killing of hostages).
Sadly, I think that the continued use of this AI is supported because it helps provide cover for individuals involved in war crimes. I wouldn't be surprised if the AI weren't very sophisticated at all; to serve the purpose of cover, it doesn't need to be.
The world should not forget this.
> Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.
"zero-error policy" as described here is a remarkable euphemism. You might hope that the policy is not to make any errors. In fact the policy is not to acknowledge that errors can occur!
Getting all these reports about atrocities, I wonder if the conflict in the area has grown more brutal over the decades or if this is just business as usual. I'm in my late 30s, and growing up in the EU, the conflict in the region was always present. I don't remember hearing the kind of stories that come to light these days, though: indiscriminate killings, food and water being targeted, aid workers being killed. I get that it's hard to know what's real and what's not and that we live in the age of information, but I'm curious how, on a high level, the conflict is developing. Does anyone have a good source that deals with that?
How far does the AI system go… is it behind the decision to starve the population of Gaza?
And if it is behind the strategy of starvation as a tool of war, is it also behind the decision to kill the aid workers who are trying to feed the starving?
How far does the AI system go?
Also, can an AI commit a war crime? Is it any defence to say, “The computer did it!” or “I was just following the AI’s orders!”?
There’s so much about this death machine AI I would like to know.
The use of AI and the authorisation to kill civilians are unrelated parts of this story. Nowhere does it mention that the AI is being used to justify the killing of civilians.