> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It just created a situation in which a bunch of people with guns were told that some teen had a gun. That's a very unsafe situation that the system created, out of nothing.
And some teen may be traumatized. Again, unsafe.
Incidentally, the article's quotes make this teen sound more adult than anyone who sold or purchased this technology product.
Another false positive by one of these leading content filters that schools use - the kid said something stupid in a group chat, an AI reported it to the school, and the school contacted the police. The kid was arrested, strip-searched, and held for 24 hours without access to their parents or counsel. They ultimately had to spend time on probation, undergo a full mental health evaluation, and attend an alternative school for a period of time. They are suing Gaggle, who claims they never intended their system to be used that way.
These kinds of false positives are incredibly common. I interviewed at one of their competitors (Lightspeed), and they actually provide a paid service where humans review all the alerts before they are forwarded to the school or authorities. This is a paid add-on, though.
Engineer: hey I made this cool thing that can help people in public safety roles process information and make decisions more efficiently! It gives false positives, but you still save more time than it takes to weed through them.
Someone nearby: well what if they use it to replace human thinking instead of augment it?
Engineer: well they would be ridiculous. Nobody would ever think that’s a good idea.
Marketing Team: it seems like this lands best when positioning it as a decision-making tool. Let’s get some metrics on how much faster it is at making decisions than people are.
Sales Rep: ok, Captain, let’s dive into our flagship product, DecisionMaker Pro, the totally automated security monitoring agent…
::6 months later—some kid is being held at gunpoint over snacks.::
In any system, there are false positives and false negatives. In some situations (like high-recall disease screening), false negatives are much worse than false positives, because the cost of a false positive is just a more rigorous follow-up screening.
But in this case both are bad. If it was a false negative students might need therapy for a more tragic reason.
Aside from improving the quality of the detection model, we should try to reduce the “cost” of both failure modes as much as possible. Putting a human in the loop or having secondary checks are ways to do that.
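To make that trade-off concrete, here's a back-of-the-envelope sketch. Every number and name below is hypothetical; the point is only that a human-in-the-loop step mostly works by shrinking the cost of a false positive, not its rate:

    def expected_cost_per_alert(p_event, fpr, fnr, cost_fp, cost_fn):
        # p_event: prior probability a real gun is present in the scene
        # fpr: P(alarm | no gun); fnr: P(no alarm | gun)
        return (1 - p_event) * fpr * cost_fp + p_event * fnr * cost_fn

    # Dispatching armed police on every raw alert vs. quietly dismissing
    # vetted false alarms: human review mainly shrinks cost_fp.
    raw    = expected_cost_per_alert(1e-6, 1e-4, 0.05, cost_fp=1000, cost_fn=1000000)
    vetted = expected_cost_per_alert(1e-6, 1e-4, 0.05, cost_fp=10,   cost_fn=1000000)
    print(raw, vetted)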
Um. That's not really the danger here.
The danger is that it's as clear as day that in the future someone is gonna be killed. That's not just a possibility. It's a certainty given the way the system is set up.
This tech is not supposed to be used in this fashion. It's not ready.
"“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident,” the principal wrote. “Our counselors will provide direct support to the students who were involved.”"
Make them pay money for false positives instead of direct support and counselling. This technology is not ready for production; it should be in a lab, not in public buildings such as schools.
> Make them pay money for false positives instead of direct support and counselling.
Agreed.
> This technology is not ready for production
No one wants stuff like this to happen, but nearly all technologies have risks. I don't think a single false positive outweighs all of its benefits; it would depend on the rates of false and true positives, and the (subjective) value of each (both high in this case, though I'd say preventing 1 shooting is unequivocally of more value than preventing 1 innocent person being unnecessarily held at gunpoint).
Stuff like this feels like some company has managed to monetize an open-source object detection model like YOLO [1], creating something that could be cobbled together relatively easily, and then sold it as advanced AI capabilities. (You'd hope they'd have at least fine-tuned it / have a good training dataset.)
[1] https://arxiv.org/abs/1506.02640
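For a sense of how low the barrier to entry is, here's a minimal sketch using the off-the-shelf ultralytics YOLO package. Everything here is an assumption for illustration - the vendor's actual stack, weights, and thresholds are unknown, and the stock COCO weights below don't even include a "gun" class, which is exactly why fine-tuning and published stats matter:

    from ultralytics import YOLO

    # Stock COCO weights for illustration; COCO has no "gun" class, so a real
    # product would need fine-tuning on a (hopefully good) firearms dataset.
    model = YOLO("yolov8n.pt")

    def flag_frame(image_path, threshold=0.5):
        """Return (label, confidence) detections above a threshold for one frame."""
        hits = []
        for result in model(image_path):
            for box in result.boxes:
                label = model.names[int(box.cls)]
                confidence = float(box.conf)
                if confidence >= threshold:
                    hits.append((label, confidence))
        return hits

    # hypothetical frame path; in a deployment this would be a camera capture
    print(flag_frame("camera_frame.jpg"))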
We've got a model out there now that we've just seen has put someone's life at risk... Does anyone apart from that company actually know how accurate it is? What it's been trained on? Its false positive rate? If we are going to start rolling out stuff like this, should it not be mandatory for stats / figures to be published? For us to know more about the model, and what it was trained on?
I expect that John Bryan -- who produces content as The Civil Rights Lawyer https://thecivilrightslawyer.com -- will have something to say about it.
He frequently criticizes the legality of police holding people at gunpoint based on flimsy information, because the law considers it a use of force, which needs to be justified. For instance, he's done several videos about "high risk" vehicle stops, where a car is mistakenly flagged as stolen, and the occupants are forced to the ground at gunpoint.
My take is that if both the AI automated system and a "human in the loop" police officer looked at the picture and reasonably believed that the object in the image was a gun, then the stop might have been justified.
But if the automated system just sent the officers out without having them review the image beforehand, that's much less reasonable justification.
The article says the police later showed the student the photo that triggered the alert. He had a crumpled-up Doritos bag in his pocket. So there was no gun in the photo, just a pocket bulge that the AI thought was a gun... which sounds like a hallucination, not any actual reasonable pattern-matching going on.
But the fact that the police showed the photo does suggest that maybe they did manually review the photo before going out. If that's the case, I do wonder how much the AI influenced their own judgment, though. That is, if there was no AI involved, and police were just looking at real-time surveillance footage, would they have made the same call on their own? Possibly not: it feels reasonable to assume that they let the fact of the AI flagging it override their own judgment to some degree.
Ah, the coming age of Palantir's all-seeing platform, and Peter Thiel becoming the shadow Emperor. Too bad non-deterministic ML systems are prone to errors that risk lives when applied wrongly to crucial parts of society. But in an authoritarian state, those will be hidden away anyway, so there's nothing to see here: move along, folks. Yes, surveillance and authoritarianism go hand in hand; ask China. It's important to protest these methods and push lawmakers to act against them; now, before it's too late.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
Prioritize your own safety by not attending any location fitted with such a system, or deemed to be such a dangerous environment that such a system is desired.
The AI "swatted" someone.
Calling it today. This company is going to get innocent kids killed.
How many packs of Twizzlers, or Doritos, or Snickers bars are out there in our schools?
First time it happens, there will be an explosion of protests. Especially now that the public knows that the system isn't working but the authorities keep using it anyway.
This is a really bad idea right now. The technology is just not there yet.
Why did they waste time verifying? The police should have eliminated the threat before any harm could be done. Seconds count when you're keeping people safe.
The dispatch relayer and responding officers should at least have ready access to a screen where they can see a video/image of the raw footage that triggered the AI alert. If it is a false alarm, they will better see it and react accordingly, and if it is a real threat they will better understand the initial context and who may have been involved.
Walking through TSA scanners, I always get that unnerving feeling I will get pulled aside. 50% of the time they flag my cargo pants because of the zipper pockets - there is nothing in them, but the scanner doesn't like them.
Now we get the privilege of walking by AI security cameras placed in random locations, hoping they don't flag us.
There's a ton of money to be made with this kind of global frisking, so lots of pressure to roll out more and more systems.
How does this not spiral out of control?
To be fair, at least you can choose not to wear the cargo pants.
A friend of mine once got pulled aside for extra checks and questioning after he had already gone through the scanners, because he was waiting for me on the other side to walk to the gates together and the agent didn't like that he was "loitering" – guess his ethnicity...
Email your state congressman and tell them what you think.
Since (pretty much) nobody does this, if a few hundred people do it, they will sit up and take notice. It takes fewer people than you might think.
Since coordinating this with a bunch of strangers (i.e. the public) is difficult, the most effective way is to normalise speaking up in our culture. Of course, normalising it will increase the incoming comm rate, which will slowly decrease the effectiveness, but even past that point it's better than where we are, which is silent public apathy.
Get Precheck or global entry. I only do a scanner every 5 years or so when I get pulled at random for it. Otherwise it's metal detector only. Unless your zippers have such chunky metal that they set that off you'll be fine. My belt and watch don't.
Note: Precheck is incredibly quick and easy to get; and GE is time consuming and annoying, but has its benefits if you travel internationally. Both give the same benefits at TSA.
Second note: let's pretend someone replied "I shouldn't have to do that just to be treated...blah blah" and that I replied, "maybe not, but a few bucks could still solve this problem, if it bothers you enough that's worth it to you."
I don't often fly, but back when I went to Germany on a school trip, on the return flight I got pulled aside into a small room by whatever the German equivalent of TSA is, and they swabbed the skin of my belly and the inside of my bag. I'm guessing it was a drugs check and I must have just looked shifty, because I get nervous in situations like that, but I do find it funny that they pulled me aside instead of the guys with me who almost certainly had something on them.
Also my partner has told me that apparently my armpits sometimes smell of weed or beer, despite me not coming in contact with either of those for a very long time, and now I definitely don't want to get taken into a small room by a TSA person (After some googling, apparently those smells can be associated with high stress)
I already adjust my clothing choices when flying to account for TSA's security theater make-work bullshit. Wonder how long before I'm doing that when preparing to go to other public places.
(I suppose if I attended pro sports games or large concerts, I'd be doing it for those, too)
Getting pulled aside by TSA for secondary screening is nowhere in the ballpark of being rushed at gunpoint as a teenager and told to lay down on the ground, where one false move will get you shot by a trigger-happy cop that probably won't face any consequences - especially if the innocent victim is a Black male.
In fact, they will probably demonize the victim to find an excuse why he deserved to get shot.
I got pulled aside because I absentmindedly showed them my concealed carry permit, not my driver's license. I told them I was a consultant working for their local government and was going back to Austin. No harm no foul.
This may be mean, but we should really be careful about just handing AI over to technically illiterate people. They're far more likely to blindly trust the LLM/AI output than someone who may be more experienced and take a beat. AI in an agentic-state society (what we have in America at least) is an absolute ticking time bomb. Honestly, this is what AI safety teams should be concentrating on: making sure people who think the computer is infallible understand that, no, it isn't, and you shouldn't just assume what it tells you is correct.
It's basically a failure of setting up the proper response playbook.
Instead of:
1. AI detects gun on surveillance
2. Dispatch armed police to location
It should be:
1. AI detects gun on surveillance
2. Human reviews the pictures and verifies the threat
3. Dispatch armed police to location
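A minimal sketch of that second playbook; every name here is illustrative, not any vendor's real API:

    from dataclasses import dataclass

    @dataclass
    class Alert:
        camera_id: str
        confidence: float

    def detector(frame):
        # stand-in for the AI model
        return Alert("cafeteria-2", 0.62) if frame.get("suspicious") else None

    def human_review(frame, alert):
        # stand-in for a trained reviewer looking at the raw footage first
        return "dismissed" if frame.get("actually") == "doritos" else "confirmed"

    def handle(frame):
        alert = detector(frame)                        # 1. AI detects possible gun
        if alert is None:
            return "no action"
        if human_review(frame, alert) != "confirmed":  # 2. human verifies the threat
            return "logged false positive on " + alert.camera_id
        return "dispatch armed police to " + alert.camera_id  # 3. only now dispatch

    print(handle({"suspicious": True, "actually": "doritos"}))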
I think the latter version is likely what already took place in this incident, and it was actually a human that also mistook a bag of Doritos for a gun.
But that version of the story is not as interesting, I guess.
He could have been easily murdered. It's far from the first time that a bunch of overzealous cops murder a kid. I would never ever in my life set foot in a place that sends armed cops after me so easily. That school is extremely dangerous.
I think the reason the school bought this silly software is because it's a dangerous school, and they're grasping at straws to try and fix the problem. The day after this false positive, a student was robbed.[1] Last month, a softball coach was charged with rape and possession of child pornography.[2] Last summer, one student was stabbed while getting off the bus.[3] Last year, there were two incidents where classmates stabbed each other.[4][5]
1. https://www.nottinghammd.com/2025/10/22/student-robbed-outsi...
2. https://www.si.com/high-school/maryland/baltimore-county-hig...
3. https://www.wbaltv.com/article/knife-assault-rossville-juven...
4. https://www.wbal.com/stabbing-incident-near-kenwood-high-sch...
5. https://www.cbsnews.com/baltimore/news/teen-injured-after-re...
If false positives are known to happen, then you design a system where the image is vetted before telling the cops the perpetrator is armed. The company is basically swatting, but I'm sure they'll never be held liable.
Actually, if a system has too many false positives or false negatives, it's basically useless. There will eventually be doubts amongst the operators of it and the whole thing will implode, which is the best possible outcome.
We already went through this years ago with all those terrorism databases, and we (humanity) have learned nothing. Any database will have a percentage of erroneous data; it is impossible to eliminate erroneous data completely. Therefore, any database used to identify <fill in the blank> will have erroneous conclusions. It's been observed over and over again, and governments can't help themselves: "this time it will be different because <fill in the blank>", e.g. AI.
That was my first thought as well. The worry is that police officers make mistakes, which leads to hapless people getting terrorized, harmed, or killed. The bad thing about AI is it'll allow police to escape responsibility. Also, where a human who realizes they made a mistake can admit it and everything is okay, AI won't walk it back. AI said he had a gun. But when we checked, he didn't have it anymore.
In the Menezes case the cops were playing a game of telephone that ended up with him being shot in the head.
Not sure nothing will happen. Some trial lawyers would love to sue a city, a school system, and an AI surveillance company over "trauma, anxiety, and mental pain and suffering" caused by the incident. There will probably be a settlement that nobody ever hears about.
Law enforcement officers, judicial officials, social workers, and similar generally maintain qualified immunity from liability in the course of their work. Consider, for example, a case in which judges and social workers allegedly failed to properly assess a mother's fitness for child custody despite repeated indicators suggesting otherwise. The child was ultimately placed in the mother's care, and later was killed execution-style (that is, deliberately, not through negligence).
The world is doing fairly OK, thank you. The US, however, I'm not so sure, as people here are apparently more concerned by the AI malfunction than by the idea that it's somehow sensible to live-monitor high schools for gun threats.
A bunch of companies and people invested unimaginable amounts of money in these technologies in the hope they will multiply that money. They will shove it down our throats no matter what. This isn't about security and making the world a better place, saving lives, or preventing bad things from happening; this is strictly about those people and companies making as much money as possible, or at least, for now, not losing the money they invested.
The school admin has no understanding of the tech and only the dimmest comprehension of what happened. Asking them to do anything besides what the tech company told them to do is asking wayyy too much.
We blame AI here, but what's up with law enforcement that comes with loaded guns in hand and sends someone to the ground and cuffs him before actually doing any check?
That is the real issue.
A police force anywhere else in the world that knows how to behave would have approached the student, had a small chat with him, found out all he had in his hands was a bag of Doritos, maybe politely asked to see the contents of his bag, explained that the search was triggered by an autodetection system that can lead to occasional errors, and wished him a good day.
The guidance counselor does not have the training or time to "fix" the trauma you just gave this kid and his friends. Insane to put minors through this.
I wonder if the AI correctly identified it as a bag of Doritos, but was also trained on the commercial[0] where the bag appears to beat up a human (his fault for holding on too tight) and then it destroys an alien spacecraft.
[0] https://www.youtube.com/watch?v=sIAnQwiCpRc
An alert by one of these AI tools, which from what I understand have a terrible track record, should not be reasonable suspicion or probable cause to swarm a teenager with guns drawn. I wish more people in local communities would understand how much harm this type of surveillance and response causes. Our communities should not be using these tools.
AI can be used in one of two ways:
1. To enhance human productivity; or
2. To replace humans.
Companies, particularly in the US, very much want to go with (2), and part of the reason they can is because there are zero consequences for incidents like this.
A couple of examples spring to mind:
1. The UK Post Office (Horizon) scandal, where a bad system accused subpostmasters of theft, some of whom committed suicide over the allegations. Those allegations were later proven false, and it was the system's fault. IMHO the people who signed off on and deployed this should be charged with negligent homicide; and
2. The Hertz case, where people who had returned cars were erroneously flagged as car thieves and a report was made to police. This created hell for people, who would often end up with warrants they had no idea about and would be detained on random traffic stops over a car that was never stolen.
Now these aren't AI but just like the Doritos case here, the principle is the same: companies are trying to replace people with computers. In all cases, a human should be responsible for reviewing any such complaint. In the Hertz case, a human should check to see if the car is actually stolen.
In the Post Office situation, the system needs to show its work. Deployment should be against the existing accounting system, and discrepancies between the two need to be investigated for bugs until the system is proven correct. Particularly in the early stages, a forensic accountant (if necessary) should verify that funds were actually stolen before filing a criminal complaint.
And if "false positive" criminal complaints are filed, the people who allowed that to happen, if negligent (and we all know they are), should themselves be criminally charged.
We are way too tolerant of black box systems that can result in significant harm or even death to people. Show your work. And make a human put their name and reputation to any output of such systems.
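A sketch of what that parallel run could look like, with hypothetical account names: the new system's ledger is only trusted where it agrees with the old one, and every discrepancy becomes a bug investigation rather than a criminal referral.

    def reconcile(new_ledger, old_ledger, tolerance=0.01):
        """Return account IDs where the two systems disagree beyond tolerance."""
        discrepancies = []
        for account in set(new_ledger) | set(old_ledger):
            if abs(new_ledger.get(account, 0.0) - old_ledger.get(account, 0.0)) > tolerance:
                discrepancies.append(account)
        return sorted(discrepancies)

    # Each flagged account goes to a human (a forensic accountant if necessary)
    # to decide whether it's a system bug or a real shortfall.
    print(reconcile({"branch-17": 1042.50, "branch-18": 500.0},
                    {"branch-17": 1000.00, "branch-18": 500.0}))  # ['branch-17']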
The core of the issue is that many Americans do carry weapons which means that whatever the security system, it needs to keep in mind that the suspect might be armed and about to start shooting. This makes the police biased towards escalation because the only way against a shooter is to shoot first.
This problem doesn't exist in Europe or Japan because guns aren't that ubiquitous, which means that the police have the time to think before they act, which makes them less likely to escalate and start shooting. Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
I don't have kids yet, but I may someday. I went to public school myself, and would prefer to send any kid of mine to public school as well. (I'm not hard against private schools, but I'd prefer my kid gets to make friends from all walks of life, not just people who have parents who can afford private school.)
But I really wouldn't want to send my kid to a school that surveils students all the time, and uses garbage software like this that directly puts kids into dangerous situations. I feel like with a private school, I'd have more choice and ability to influence that sort of thing.
>> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
No. If you're investigating someone and have existing reason to believe they are armed then this kind of false positive might be prioritizing safety. But in a general surveillance of a public place, IMHO you need to prioritize accuracy since false positives are very bad. This kid was one itchy trigger-pull away from death over nothing - that's not erring on the side of safety. You don't have to catch every criminal by putting everyone under a microscope, you should be catching the blatantly obvious ones at scale though.
The perceived threat of government forces assaulting and potentially killing me for reasons I have no control over: this is the kind of stuff that terminates the social contract. I'd want a new state that protects me from such stuff.
Looks like per their website it did function as intended... It surfaces potential threats for the school to look at and make a human decision. The principal decided to send the police after the school safety team had dismissed it, as part of the correct process. I mean, fire alarms go off for lots of things that are not fires... This was an alert meant to be validated by a human, and the validation messed up.
> Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
This exact scenario is discussed in [1]. The "human in the loop" failed, but we're supposed to blame the human, not the AI (or the way it was implemented). The humans serve as "moral crumple zones".
"""
The emphasis on human oversight as a protective mechanism allows governments and vendors to have it both ways: they can promote an algorithm by proclaiming how its capabilities exceed those of humans, while simultaneously defending the algorithm and those responsible for it from scrutiny by pointing to the security (supposedly) provided by human oversight.
"""
The article doesn't confirm that there was definitely a human in the loop, but it sorta suggests that police got a chance to manually verify the photo before going out to harass this poor kid.
I suspect, though, that the AI flagging that image heavily influenced the cop doing the manual review. Without the AI, I'd expect that a cop manually watching a surveillance feed would have found nothing out of the ordinary, and this wouldn't have happened.
So I agree that it's weird to just blame the human in the loop here. Certainly they share blame, but the fact of an AI model flagging this sort of thing (and doing an objectively terrible job of it) in the first place should take most of the blame here.
"Omnilert" .. "You Have 10 Seconds To Comply"
-now targeting Black children!
Q: What was the name of the Google AI Ethicist who was fired by Google for raising the concern that AI overwhelmingly negatively framed non-white humans as threats? A: Timnit Gebru.
https://en.wikipedia.org/wiki/Timnit_Gebru#Exit_from_Google
We, as technologists, ARE NOT DOING BETTER. We must do better, and we are not on the "DOING BETTER" trajectory.
We talk about these "incidents" with breathless, "Wwwwellll if we just train our AI better ..." and the tragedies keep rolling.
Q2: Which of you has had a half dozen Squad Cars with Armed Police roll up on you, and treat you like you were a School Shooter? Not me, and I may reasonably assume it's because I am white, however I do eat Doritos.
> “They showed me the picture, said that looks like a gun, I said, ‘no, it’s chips.’”
So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Fundamentally, this isn't really any different from a person seeing someone with what looks like a gun and calling the cops, only it turns out the person didn't see it clearly.
The main issue is just that with increased numbers of images, there will be an increase in false positives. Can this be fixed by including multiple images, e.g. from motion of the object, so police (and the AI) can better eliminate false positives before traumatizing some poor teen?
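One cheap mitigation along exactly those lines is to require the detection to persist across several consecutive frames before anything is escalated, so a one-frame shadow or crumpled bag never triggers a response. A sketch, with made-up window sizes:

    from collections import deque

    class TemporalFilter:
        """Alert only when a detection persists across most of a rolling window."""
        def __init__(self, window=10, min_hits=8):
            self.recent = deque(maxlen=window)
            self.min_hits = min_hits

        def update(self, frame_has_gun):
            self.recent.append(frame_has_gun)
            return (len(self.recent) == self.recent.maxlen
                    and sum(self.recent) >= self.min_hits)

    f = TemporalFilter()
    verdicts = [False] * 5 + [True] + [False] * 10  # one spurious single-frame hit
    print(any(f.update(v) for v in verdicts))       # False: never escalated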
Picture? Images? But those are just frames of footage the cameras have captured! Why would one purposefully use less information to make a decision rather than more?
Just put the full footage in front of an unbiased third party for a multi-stage verification first. The problem space isn't "is that weird shadow in the picture a gun or not?" it's "does the kid in the video have a gun?". It's not hard to figure out the difference between a bag of chips and a gun based on body language. Presumably the kid ate chips out of the bag? Using certain motions that one makes when doing that? Presumably the kids around him all saw the object in his hands and somehow did not react as if it was a gun? Jeez.
> So AI did the initial detection, but police looked at it and agreed. We don't see the image, but it probably did look like a gun because of a weird shadow or something.
Not sure I agree. The AI flagging it certainly biased the person doing the manual review toward agreeing with the AI's assessment. I can imagine a scenario where there was no AI involved, just a human watching that same surveillance feed, and (correctly) not seeing anything alarming in it.
Also I expect the AI completely failed at context. I wouldn't be surprised if the full video feed, a few minutes (or even seconds) before the flagged frame, shows the kid crumpling up the empty Doritos bag and stuffing it in his pocket. The AI probably doesn't keep all that context around to use when making a later decision, and giving just the flagged frame of video to the human may have caused them to miss out on important context.
Even better, share the frame(s) that the guess was drawn from with a human for verification before triggering ANYTHING. How much trouble could that possibly be? How many "guns" is this thing detecting in a day across all sites? I doubt more than a couple or we'd have heard about tons of incidents, false positives or not.
It’s unsurprising, since this kind of classification is only as good as the training data.
And police do this kind of stuff all the time (or in the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I’m not sure what the answer is, but I definitely feel that “security” systems like this that are purchased and rolled out need to be highly regulated and coupled with extreme accountability and consequences for false positives.
Everything around us: political tumult and weaponization of the justice system, ICE and other capricious projections of federal authority, the failure of drug prohibition, and on and on and on, points to a very simple solution:
Abolish SWAT teams. Do away with the idea that state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
Exactly. I wonder if this a purpose-built image-recognition system, or is it a lowest-possible effort generic image model trained on the internet? Classifying a Black high school student holding Doritos as an imminent shooting threat certainly suggests the latter.
"I am invoking my 4th and 5th amendment rights afforded to me by the Constitution of the United States of America. I have no further comment until I have consulted with and am in the presence of my legal council."
Then, just sit back and enjoy as the lawsuit unfolds.
When people wonder how can AI mistake a bag of snacks as a weapon, simply answer "42"
It's about the question: the answer will become very clear once you understand what question was presented to the inference model, and of course what data and context it was fed.
Inflicting trauma on a harmless human in the name of the "safety of others" is never OK. The victim here was not physically harmed, but is likely to end up with PTSD and all the mental health issues that come with it.
Imagine the head-scratching going on among execs who are surprised that things don't work when probabilistic software is used for deterministic purposes, without realizing that there is, by nature, a gap between the two.
I'm sure there will be no head scratching. They already know that this can happen, and don't care, because they know that if someone gets killed because of it, they won't be held responsible. And may not even lose any customers.
The best part of the technocracy is that they're not actually all that good at anything. The second best part is that when their mistakes end in someone dead, there will be some way that they're not responsible.
At least there is a check done by humans in a human way. What if this human check is removed in the future, as AI decisions are deemed to no longer require human inspection?
If these AI video based gun detectors are not a massive fraud I will eat one.
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like?
What does a man in a bulky sweatshirt with a pistol on his back walk like?
What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
The brochure linked from TFA has a screenshot of a combination of segmentation and object recognition models, which are fairly standard in NVRs. A quick skim of the vendor website seems to confirm this[1] and states a claim that they are not analyzing gait.
[1]: https://www.omnilert.com/blog/what-is-visual-gun-detection-t...
The whole idea, even accepting that the core premise is OK to begin with, needs the same analysis applied to it that medical tests get: will there be enough false positives, with enough harm caused by them, that this is actually worse than doing nothing? Compared with the likelihood of improving an outcome and how bad a failure to intervene is on average, of course.
Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that's true. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.
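That's the classic base-rate problem, and it's easy to put hypothetical numbers on it. Even granting the detector a generous-sounding spec, almost every alarm is false:

    prevalence = 1e-7           # hypothetical: P(a given analyzed frame really shows a gun)
    sensitivity = 0.95          # P(alarm | gun)
    false_positive_rate = 1e-4  # P(alarm | no gun): one alarm per 10,000 clean frames

    p_alarm = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    p_gun_given_alarm = sensitivity * prevalence / p_alarm
    print(p_gun_given_alarm)    # ~0.00095: well over 99.9% of alarms are false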
So I'm sure that analysis was either deliberately never performed, or was and then was ignored and not publicized. So, yes, it's a fraud.
(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools—shooters would just avoid any scheme that involved loitering around with a firearm where the cameras can see them, and count on starting things very soon after arriving—like, once you factor in second order effects, too, there's just no hope for these standing up to real scrutiny)
The model seems pretty shitty. Does it only look on a frame-by-frame basis? Literally one second of video context and it would never make that mistake.
I can understand the outrage in this thread, but literally none of what you are all calling for will be done. No one from the justice system or law enforcement reads HN to see what should be done. I wish folks here would keep a cooler head rather than posting lengthy rants and vents that call for punishing school staff. It is really unprofessional and immature for a community that prides itself on level-headedness to fall constantly into a cycle of vitriol.
Can someone outline a more pragmatic, if not likely, course of what happens next after this? Is it swept under the rug and we move on?
It wasn't sour cream and onion and didn't contain cash, so it's super sus.
But really, this is typical of cop overreaction, with escalation and ego rather than calm, legal, and reasonable investigation. Karens may SWAT people they don't like, but it's police officers who must use reasonableness and restraint to defend the vestiges of their impartiality and community confidence, by asking questions and gathering evidence in a legal and appropriate manner rather than rushing to conclusions. Case in point: the NYC rough false arrest of a father in front of his kid to retrieve his mis-delivered package, where the egomaniacal bully cop aggressively lectures the guy over the cop's own mistake to cover his own ego while blaming the victim: https://youtu.be/LXd-4HueHYE
With this high a level of hallucination, cops need to reach for tranquilizers more. If the student had reached for his bag just before the cops arrived, BLM 2.0 would have started.
To be fair, most commercials for Doritos, Skittles, Mentos, etc., if occurring in real life, would result in a strong police response just after they cut away.
AI is a false (political) wish. It can't and will never work; it is the desperation of an over-extended power structure trying to hold on and permanently consolidate control of all of the world's population, and nothing else.
The proofs are there. Philosophers mulled this over long ago and made clear statements as to why AI can't work.
Not that for a second do I misunderstand that it is "all in" for AI, and we all get to go along for the 100-trillion-dollar ride to hell.
Can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? Fuck ya we can!
But anything that requires understanding is forever out of reach, which unfortunately is also lacking in the people pushing this thing now.
Before I clicked the article, I said to myself "The victim's gotta be Black", and lo and behold.
AI has inherited police's (shitty, racist, and dangerous) idea that any Black person is a dangerous monster for whom anything is a weapon.
You're free to (attempt to) amend the Second Amendment, but the Supreme Court of the United States has already affirmed and reaffirmed that individual ownership of firearms in common use is a right.
What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?
I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?
If it's analyzing video at 30 frames per second, it's getting 86400 x 30 = 2,592,000, so roughly 2.5 million frames per day per camera. So when it causes enormous, unnecessary trauma to one student per week, the company can rightfully claim it has less than a 1 in 10 million false positive rate.
I was unduly surprised and disappointed when I saw the photo of the kid and he turned out to be black. I would love to believe that this had no impact on how the whole thing played out, but I don't.
All right, they've gotta have a plain-clothes bro go up there to make sure the kid is chill. You know, the difference between a murder and not can be as little as somebody being nice.
Sounds like this high school is doing a great job preparing students for the real world, where they can be swarmed by jackbooted thugs at any moment for any reason.
>Omnilert later admitted the incident was a “false positive” but claimed the system “functioned as intended,” saying its purpose is to “prioritize safety and awareness through rapid human verification.”
It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".
The safest thing to do is to pull all Frito Lay products off shelves until the packaging can be redesigned to ensure that AI never confuses them for guns. It's a liability issue. THINK OF THE CHILDREN.
I think it's almost guaranteed that this model has race-related biases, so no, I don't think you're kidding at all. I think it's entirely likely that an Asian (or white) kid of the same build, wearing the same clothes, with a crumpled-up bag of Doritos in his pocket, would not get flagged as having a gun.
b00ty4breakfast|4 months ago
Oh look, a corporation refusing to take responsibility for literally anything. How passé.
xbar|4 months ago
Decision-maker accountability is the only thing that halts bad decision-making.
dekken_|4 months ago
It already cost money, paying for the time and resources that were misappropriated.
There needs to be resignations, or jail time.
Havoc|4 months ago
Behold - a real-life example of a "Not a hotdog" system, except this one is gun / not-a-gun.
And the fictional one from the series was more accurate...
macintux|4 months ago
I expect a school to be smart enough to say “Yes, this is a terrible situation, and we’re taking a closer look at the risks involved here.”
dredmorbius|4 months ago
<https://en.wikipedia.org/wiki/Killing_of_Amadou_Diallo>
tamimio|4 months ago
https://www.youtube.com/watch?v=wzybp0G1hFE
fritzo|4 months ago
I wonder how effective an apology and explanation would have been? Just some respect.
balls187|4 months ago
No. Trusting AI is clearly the issue.
If there was a 9-1-1 call to the police that there was an active shooter at your kid's school, how would you want the police to show up?
Ylpertnodi|4 months ago
>Obviously, for Americans, the only solution is to get rid of the gun culture, but this will never happen, so suck it up that AI gets you swatted.
...and you are correct.
jharrison11|4 months ago
It's pretty clearly documented how it works here:
https://www.omnilert.com/solutions/gun-detection-system
https://www.omnilert.com/solutions/ai-gun-detection
https://www.omnilert.com/solutions/professional-monitoring
axus|4 months ago
https://en.wikipedia.org/wiki/Computer_says_no
beloch|4 months ago
e.g. Not "this student has a gun" but "this model says the student has a gun with a probability of 60%".
If an AI can't quantify its degree of confidence, it shouldn't be used for this sort of thing.
xp84|4 months ago
I wanna see the frames too.
tecoholic|4 months ago
neverkn0wsb357|4 months ago
And police do this kind of stuff all the time (or in the very least you hear about it a lot if you grew up in a major city).
So if you’re gonna automate broken systems, you’re going to see a lot more of the same.
I’m not sure what the answer is, but I definitely feel that "security" systems like this, purchased and rolled out this way, need to be highly regulated and coupled with extreme accountability and consequences for false positives.
jMyles|4 months ago
Abolish SWAT teams. Do away with the idea that state employees can be permitted to be more armed than anyone else.
Blaming the so-called 'swatter' (whether it's a human or AI) is really not getting at the root of the problem.
more_corn|4 months ago
I thought those two things were impossible?
teeray|4 months ago
"Sorry, that's Nacho gun"
lunias|4 months ago
Then, just sit back and enjoy as the lawsuit unfolds.
uda|4 months ago
It's about the question: the answer will become very clear once you understand what question was presented to the inference model, and of course what data and context were fed to it.
aussieguy1234|4 months ago
I hope they sue the police department over this.
j45|4 months ago
Imagine the head-scratching going on among execs who are surprised when things don't work because probabilistic software is being used for deterministic purposes, without realizing there's a gap between the two by nature.
doublerabbit|4 months ago
I can't. The execs won't care, and in their sadistic way will probably cheer.
idontwantthis|4 months ago
How on Earth does a person walk with a concealed gun? What does a woman in a skirt with one taped to her thigh walk like? What does a man in a bulky sweatshirt with a pistol on his back walk like? What does a teenager in wide-legged cargo jeans with two pistols and extra magazines walk like?
VTimofeenko|4 months ago
[1]: https://www.omnilert.com/blog/what-is-visual-gun-detection-t...
walkabout|4 months ago
Given that there's no relevant screening step here and it's just being applied to everyone who happens to be at a place, it's truly incredible that such an analysis would shake out in favor of this tech. The false positive rate would have to be vanishingly tiny, and it's simply not plausible that it is. And that would have to be coupled with a pretty low false negative rate, or you'd need an even lower false positive rate to make up for how little good it's doing even when it's not false-positiving.
So I'm sure that analysis was either deliberately never performed, or was and then was ignored and not publicized. So, yes, it's a fraud.
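To put numbers on "vanishingly tiny" (every figure below is an assumption for illustration; no real rates are published):

    # Back-of-envelope Bayes: what fraction of alerts would be real?
    prevalence = 1e-7      # assumed chance a random frame shows a real gun
    sensitivity = 0.99     # assumed true positive rate (generous)
    fp_rate = 1e-4         # assumed false positive rate (very generous)

    p_alert = sensitivity * prevalence + fp_rate * (1 - prevalence)
    print(f"P(real gun | alert) = {sensitivity * prevalence / p_alert:.2%}")
    # -> 0.10%: even with generous assumptions, ~999 in 1000 alerts are false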
(There's also the fact that as soon as these are known to be present, they'll have little or no effect on the very worst events involving firearms at schools: shooters would just avoid loitering with a firearm where the cameras can see them and count on starting things very soon after arriving. Once you factor in second-order effects, there's just no hope of these standing up to real scrutiny.)
15155|4 months ago
Gait analysis is really good these days, but normal, small objects in a bag don't impact your gait.
uvaursi|4 months ago
Can someone outline a pragmatic, if not likely, account of what happens next? Is it swept under the rug and we move on?
4ndrewl|4 months ago
That ship has long sailed, buddy.
rs186|4 months ago
The student was black.
Is that really a coincidence?
It's just a matter of time before this or something worse happens.
mchannon|4 months ago
ED-209 mistakenly views a young man as armed and blows him away in the corporate boardroom.
The article even included an homage to:
“Dick, I’m very disappointed in you.”
“It’s just a small glitch.”
SanjayMehta|4 months ago
Edit: And racism. Just watched the video.
BeetleB|4 months ago
The real question is: would this have happened in an upper/middle-class school?
The student has dark skin and attends a school in a crime-ridden neighborhood.
Were it a white student in a low-crime neighborhood, would they have approached him with guns drawn?
The AI failure is masking the real problem - bad police behavior.
sans_souse|4 months ago
That would have been bold
burnt-resistor|4 months ago
But really, this is typical cop overreaction, with escalation and ego rather than calm, legal, reasonable investigation. Karens may SWAT people they don't like, but it's police officers who must use reasonableness and restraint to defend the vestiges of their impartiality and community confidence, asking questions and gathering evidence in a legal and appropriate manner rather than rushing to conclusions. Case in point: the rough NYC false arrest of a father, in front of his kid, over retrieving his own mis-delivered package, where the egomaniacal bully cop aggressively lectures the victim to cover for the cop's own mistake: https://youtu.be/LXd-4HueHYE
metalman|4 months ago
the proofs are there.
philosophers mulled this over long ago and made clear statements as to why ai can't work.
though not for a second do I misunderstand that it's "all in" for ai, and we all get to go on the 100 trillion dollar ride to hell.
can we have truly awesome automation for manufacturing and mundane bureaucratic tasks? fuck ya we can!
but anything that requires understanding is forever out of reach, and understanding is unfortunately also lacking in the people pushing this thing now.
whycome|4 months ago
“Computer says die”
johnnyApplePRNG|4 months ago
I hope this kid gets what he deserves.
What a tragedy. I'm sure racial profiling on the part of the AI and the police had absolutely nothing to do with it.
kelnos|4 months ago
Because that's not what slander is.
15155|4 months ago
What do you propose that is "reasonable" given the frameworks established by Heller, McDonald, Caetano, and Bruen?
I can 3D print or mill basically every item you can imagine prohibiting at home: what exactly do laws do in this case?
dgacmu|4 months ago
(* See also "How to Lie with Statistics".)
balls187|4 months ago
Fuck you.
duxup|4 months ago
It's ok everyone, you're safer now that police are pointing a gun at you, because of a bag of chips ... just to be safe.
/s
Absolutely ridiculous. We're living "computer said you did it, prove otherwise, at gunpoint".
6stringmerc|4 months ago
Or am I kidding? AI is only as good as its training and humans are...not bastions of integrity...
malux85|4 months ago
But…
Doritos should definitely use this as an advertisement: "Doritos: the only weapon of mass deliciousness," or something like that.
And of course pay the kid, so something positive can come out of the experience for him.