top | item 39923765


koutetsu | 1 year ago

As someone working in the AI field, I find this use of AI truly terrifying. Today it may be used to target Hamas and accept a relatively large number of civilian deaths as permissible collateral damage, but nothing guarantees it won't be exported and used somewhere else. On top of that, I don't think anything is done to mitigate biases in the data (if your historical data targets people from a certain group, your AI system will keep targeting people from that group) or to validate the predictions after a "target" is bombed. I wish there were more regulation for these use cases. Too bad the EU AI Act doesn't address military uses at all.


beepbooptheory | 1 year ago

I think anyone who works in the AI field is going to need to have their head on straight just to emotionally deal with things like this, and who knows what else is to come.

I can't even imagine what it would be like to simply like the idea of AI, study it, get a job writing some Python, and then one day wake up and learn you have quite a lot of blood (indirectly) on your hands.

Like either you need to become the kind of person that doesn't care, or one that learns to live with a lot of ambient guilt hanging around. Not sure which is worse.

Honestly feel so much for the ten thousand bright-eyed, intelligent nerds eager for technology and the future. I know they will be compensated well, but that won't ever balance out what will happen to their minds one way or another.

But this is an old story at this point I guess.

onethought | 1 year ago

Given we don’t know what it’s using to identify people, we can’t really know the biases. “Holding a military weapon” probably doesn’t carry a whole lot of bias (though of course there is misidentification).

koutetsu | 1 year ago

Let me quote from the article:

> Lavender learns to identify characteristics of known Hamas and PIJ operatives, whose information was fed to the machine as training data, and then to locate these same characteristics — also called “features” — among the general population, the sources explained. An individual found to have several different incriminating features will reach a high rating, and thus automatically becomes a potential target for assassination.

It literally says that they use data from known Hamas members (we don't know what this data contains) as training data, which is a recipe for biased predictions. Hamas members are a small minority in Gaza (the total population is over 2 million people), so the real data is heavily imbalanced[0], and unless that is addressed it leads to bad models.
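The base-rate problem behind that imbalance can be sketched in a few lines of Python. Every number below (the operative count, the error rates) is a made-up assumption for illustration, not a figure from the article:

```python
# Toy base-rate illustration: even a seemingly good classifier flags
# mostly innocent people when the positive class is a tiny fraction
# of the population. All numbers here are hypothetical.
population = 2_000_000                 # rough population figure from the comment
actual_positives = 10_000              # assumed count of real operatives
actual_negatives = population - actual_positives

tpr = 0.90   # assumed true-positive rate (recall of the classifier)
fpr = 0.01   # assumed false-positive rate: 1% of civilians misflagged

true_positives = tpr * actual_positives       # 9,000 operatives flagged
false_positives = fpr * actual_negatives      # 19,900 civilians flagged

precision = true_positives / (true_positives + false_positives)
print(f"flagged people who are actually operatives: {precision:.1%}")  # ~31%
```

Under these (optimistic) assumptions, roughly two out of three flagged people are innocent. That is what heavy class imbalance does to precision, regardless of how good the headline accuracy looks.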

On top of that, if you know anything about Machine Learning, you should be aware that models find spurious correlations[1] in the data: predictions look accurate on the available training and validation data, but not so much once the model is deployed and used on real data.
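The shortcut-learning failure mode is easy to reproduce with a synthetic toy (entirely made up, just to show the mechanism): a nuisance feature that correlates with the label only in the training data wins model selection, then collapses at deployment:

```python
import random
random.seed(0)

# Each example: (real_signal, shortcut, label). In the training data the
# shortcut feature happens to correlate perfectly with the label; at
# deployment it is independent of it. Purely synthetic illustration.
def make_data(n, shortcut_correlated):
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        real = label if random.random() < 0.8 else not label   # noisy true signal
        shortcut = label if shortcut_correlated else (random.random() < 0.5)
        data.append((real, shortcut, label))
    return data

def stump_accuracy(data, feature_index):
    # Accuracy of a one-feature "model" that predicts the feature's value.
    return sum(ex[feature_index] == ex[2] for ex in data) / len(data)

train = make_data(10_000, shortcut_correlated=True)
deploy = make_data(10_000, shortcut_correlated=False)

# A learner that picks the feature with the best training accuracy chooses
# the shortcut (100% on training) over the real, noisy signal (~80%).
best = max([0, 1], key=lambda i: stump_accuracy(train, i))
print("chosen feature:", "shortcut" if best == 1 else "real signal")
print("training accuracy:", stump_accuracy(train, best))
print("deployment accuracy:", stump_accuracy(deploy, best))   # collapses to ~50%
```

The model looks perfect on training and validation splits drawn from the same biased data, then does no better than a coin flip in the field, which is exactly the risk with a classifier validated only on historical targeting data.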

[0] https://developers.google.com/machine-learning/data-prep/con...

[1] https://thegradient.pub/shortcuts-neural-networks-love-to-ch...