How do we prevent humans from wholly relying on/trusting AI in systems that the world's stability relies on?
That to me is the biggest 'unacceptable risk'. A human putting a system we don't fully understand in charge of a critical process.
The article says all high-risk systems will be 'assessed' before being put to market. There's a limit to how well you can understand the side effects before deployment.
> How do we prevent humans from wholly relying on/trusting AI in systems that the world's stability relies on?
By being transparent and disclosing how the AI was trained and what content was generated by AI. That removes the magic: once people understand how simple these things are, and how easy it is to tune the results to make the AI say whatever you want, people will stop trusting them quickly.
And that is part of what these regulations are about. Transparency instead of marketing buzzwords.
As long as it's properly tested on out-of-training data with sufficient statistics, I'd welcome AI decision making over human decision making any day. When you have humans in charge, you know for sure that you have a substantially biased decision maker with completely unknown error levels.
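To make "sufficient statistics" concrete, here's a minimal sketch of what I'd want to see: error measured on data the model never trained on, reported with a confidence interval rather than a bare accuracy number. The model object and data shapes are hypothetical stand-ins:

    import math

    def evaluate_with_ci(model, held_out, z=1.96):
        # Error rate on held-out data the model never saw during
        # training, with a ~95% confidence interval (normal approx).
        # `model` is anything with a .predict(x) method; `held_out`
        # is a list of (x, label) pairs -- both hypothetical.
        errors = sum(1 for x, y in held_out if model.predict(x) != y)
        n = len(held_out)
        p = errors / n
        margin = z * math.sqrt(p * (1 - p) / n)
        return p, margin

    # e.g. (0.04, 0.012) -> report "4% +/- 1.2% error", not "96% accurate"

No human decision maker comes with an error bar like that.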
Do you trust Boeing's management or safety culture right now? Memes about MBAs aside, they're not using AI, yet to most people they're still making bad decisions. E.g., this problem already exists. It could be exacerbated by people willing to let AI take the wheel.
> Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:
* Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
* Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
* Biometric identification and categorisation of people
* Real-time and remote biometric identification systems, such as facial recognition
That seems... reasonable? I know regulation is a trigger word for some folks around here, but I think preventing a Chinese-style social credit system seems like a positive thing?
From an admittedly brief look (while interesting, it has no direct immediate relevance to me), I can't tell if this applies universally regardless of scale, or if there is any sort of gradient. Does anyone know? Some of it seems reasonable period at any scale, but some of it would be worrisome or strike me as unreasonable/negative if it applied to individuals as opposed to larger networks where emergent effects happen. To take a personal example, I self-host Blue Iris and am using some AI recognition in my home security system, which I also use for wildlife monitoring. It's really awesome vs the old plain motion detection or even motion zones, and there is zero exposure to the internet, nor even any view of public property at all (end of a 1/3 mile driveway with woods). It is useful to be able to classify known (me, family, regular neighbors) vs unknown people as well (see the sketch below). I think the societal harm potential of an isolated system like that vs the individual benefit is low, and thus I'd lean towards skepticism if that too fell under their banner of "Biometric identification and categorisation of people" or "Real-time and remote biometric identification systems, such as facial recognition".
But of course in a public setting, and/or networked with lots of others at enough scale to start effectively following people through space and time rather than just capturing one private slice, the metrics all change. So as I watch various AI efforts I'm always thinking about how they address that (or not). I don't think "profit" is even the deciding factor here; something can be non-commercial but still be a potential problem at enough scale.
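The known-vs-unknown part in an isolated setup like mine is conceptually just nearest-neighbour matching on face embeddings. A rough sketch of the idea (numpy only; the unit-length embeddings and the 0.6 threshold are assumptions, not Blue Iris's actual pipeline):

    import numpy as np

    def identify(face_embedding, gallery, threshold=0.6):
        # Match a unit-length face embedding against a gallery of
        # known people; anything below the cosine-similarity
        # threshold is reported as unknown. The embedding model
        # itself is assumed, not shown.
        best_name, best_sim = "unknown", threshold
        for name, known in gallery.items():
            sim = float(np.dot(face_embedding, known))  # cosine sim
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name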
Social scoring is an interesting topic. E.g. why is it not banned if you don't use AI? I'd guess that most social scoring systems today are not actually AI based. Perhaps they have some components that come from AI, but it's easy enough to make one with simpler mechanisms.
Also, I assume insurance risk scoring is not banned, even though that is already almost a social scoring system. E.g. you live in a dangerous poor neighbourhood, higher insurance! You are a given age, higher insurance!
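You really don't need AI for it; a handful of hand-written rules gets you a scoring system with the same effect. A deliberately crude sketch (the factors and weights are made up):

    def insurance_risk_score(applicant):
        # Toy rule-based risk score: no model, no training, same outcome.
        # `applicant` is a dict with made-up keys; weights are arbitrary.
        score = 0.0
        if applicant["neighbourhood_crime_rate"] > 0.1:
            score += 2.0   # "dangerous poor neighbourhood" surcharge
        if applicant["age"] < 25:
            score += 1.5   # young-applicant surcharge
        score += 0.5 * applicant["prior_claims"]
        return score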
It all comes down to the details. A meme sharing site that uses a computerized recommendation engine to recommend memes to users might be considered "cognitive behavioural manipulation", even if the recommender is as trivial as the one sketched below.
I don't know what I think about this; probably it's not great. I'm afraid this will hinder smaller developers and let larger corporations get ahead just because of the extra legalese.
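To illustrate how low the bar is, a recommender that could arguably fall under that reading is a few lines of counting, with nothing anyone would call AI (the data shapes are purely illustrative):

    from collections import Counter

    def recommend_memes(user_likes, all_likes, k=5):
        # Suggest memes liked by users with overlapping taste.
        # `user_likes` is a set of meme IDs; `all_likes` maps other
        # users to their liked sets -- made-up shapes for the sketch.
        scores = Counter()
        for other_user, likes in all_likes.items():
            overlap = len(user_likes & likes)
            if overlap:
                for meme in likes - user_likes:
                    scores[meme] += overlap
        return [meme for meme, _ in scores.most_common(k)]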
Also, why is some of this stuff unacceptable for companies but acceptable for the government? That seems a bit like censorship to me. The EU is really becoming a large organization that just does random shit that people don't really ask for.
My government still publishes my personal data online for anyone to see, no one seems to care about that. Rules for thee but not for me.
I get the good intentions, but I don't like that the restrictions may be too broad and may hinder useful innovation.
For example: I once got trapped against the door in a building during a fire alert. The door was badge controlled and several panicked people were pressing me against it. I would have loved a surveillance system that could detect scared people and decide to unlock the doors automatically in an emergency.
We should punish bad usages, not ban broad categories just because politicians lack creativity.
You don't necessarily need AI to create a door that opens during a fire alert. You could simply make it so that the door is no longer badge controlled if a fire alert goes off.
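Concretely, the non-AI version is one line of control logic (the signal names here are hypothetical; real access controllers typically implement this in hardware so the door fails open even if the controller crashes):

    def door_unlocked(badge_ok: bool, fire_alarm_active: bool) -> bool:
        # Fail safe: an active fire alarm overrides badge control entirely.
        return fire_alarm_active or badge_ok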
Sure, it's possible someone might exploit this by creating a fake fire alert to open the door, but they could also trick the AI you suggested by using a fake panicked expression, maybe get some friends in on the prank.
Kind of a segue into why I think AI hype and crypto hype are uncannily similar. You could argue that blockchains have a lot of potential uses, it's just that we already have some kind of legacy technology that works fine and is usually much cheaper. In much the same way, I'd say 90% of people are using LLMs as search engines, albeit search engines that are more expensive and return strange results.
Anyway, point being, 99% of the applications one could imagine getting sniped by this law can probably be implemented using alternative, non-AI technologies.
Everybody who has released a decent model has used a shadow library dump for training. Are they going to admit that they illegally downloaded more than 20 million books?
We need to act on AI now. We need to limit its ubiquity; we cannot allow it to take ANYONE's job. We are heading towards absolute doom if we allow this. Humanity will be doomed.
I hate to break it to you, but AI has been ubiquitous and taking people's jobs for decades now. [0] is an interesting article on the subject. The cat's out of the bag, and humanity isn't doomed yet (at least not due to AI). I'm not convinced.
[0]: https://sitn.hms.harvard.edu/flash/2017/history-artificial-i...
Technophobia is a constant surprise to me, especially here. But then, I'm weirdly far in the direction of embracing change — it took me until my 30s to learn about the Chesterton's Fence version of conservatism, and through that example that there was any form of conservatism which had merit.
Looking it up, I'm surprised how common technophobia is, 85-90%[0]; I should be more mindful of this.
I was going to say something about Marx welcoming this, but this time I found the issues of industrialisation and who benefits from the investments went further back than I previously thought: https://en.wikipedia.org/wiki/Protection_of_Stocking_Frames,...
[0] https://web.archive.org/web/20080511165100/http://www.learni...
Premature legislation misses the target. GDPR targeted personal data collection, a relatively anodyne adverse effect of social media, but missed addiction, which is the real damaging externality. I wonder what they'll miss this time.
> How do we prevent humans from wholly relying on/trusting AI in systems that the world's stability relies on?
In the future we will need to trust humans to not run those processes with unreliable tools.
This isn't just about AI. How do we trust the humans who manage critical processes to follow good security practices (keeping systems patched, etc)?
Impossible.
> That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world.
Who defines "close to the real world"?
Mostly just don't use AI to discriminate or manipulate people.
Quite light touch, and not at all as bad as I was expecting.
> Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:
> Disclosing that the content was generated by AI
> Designing the model to prevent it from generating illegal content
> Publishing summaries of copyrighted data used for training
This means that if an article was generated by an LLM, the publisher is now legally required to disclose that.
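In practice that disclosure could be as mundane as a label attached at publish time. A sketch of how a publisher might comply (my assumption; the Act doesn't prescribe a format):

    def publish(article_html: str, ai_generated: bool) -> str:
        # Prepend a machine- and human-readable disclosure when required.
        if ai_generated:
            notice = ('<meta name="ai-disclosure" content="ai-generated">\n'
                      '<p><em>Disclosure: this article was generated '
                      'by AI.</em></p>\n')
            return notice + article_html
        return article_html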