item 39690354

Artificial Intelligence Act: MEPs adopt law

62 points | ericb | 2 years ago | europarl.europa.eu | reply

59 comments

[+] kylecazar|2 years ago|reply
How do we prevent humans from wholly relying on/trusting AI in systems that the world's stability relies on?

That to me is the biggest 'unacceptable risk'. A human putting a system we don't fully understand in charge of a critical process.

Article says all high risk systems will be 'assessed' before being put to market. There's a limit to how well you can understand the side effects before deployment.

[+] Jensson|2 years ago|reply
> How do we prevent humans from wholly relying on/trusting AI in systems that the world's stability relies on?

By being transparent and disclosing how the AI was trained and what content was generated by AI. That removes the magic: once people understand how simple these things are, and how easy it is to tune the results to make the AI say whatever you want, people will stop trusting them quickly.

And that is part of what these regulations are about. Transparency instead of marketing buzzwords.

[+] tensor|2 years ago|reply
As long as it's properly tested on out-of-training data with sufficient statistics, I'd welcome AI decision making over human decision making any day. When you have humans in charge you know for sure that you have a substantially biased decision maker with completely unknown error levels.
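What "properly tested with sufficient statistics" can look like is easy to sketch; here is a minimal, hypothetical Python illustration (the function and the normal-approximation interval are my own choices, not anything from the Act):

```python
import math

def holdout_accuracy_ci(y_true, y_pred, z=1.96):
    """Accuracy on held-out (out-of-training) data, with an
    approximate 95% normal-approximation confidence interval."""
    n = len(y_true)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / n
    half = z * math.sqrt(acc * (1 - acc) / n)
    return acc, (acc - half, acc + half)
```

On a large enough held-out set the interval narrows, which is the "sufficient statistics" part; a human decision maker offers no comparable error bar.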
[+] andrewmutz|2 years ago|reply
Today we trust humans to manage critical processes.

In the future we will need to trust humans to not run those processes with unreliable tools.

This isn't just about AI. How do we trust the humans who manage critical processes to follow good security practices (keeping systems patched, etc)?

[+] grogenaut|2 years ago|reply
Do you trust Boeing management or safety right now? Memes about MBAs aside, they're not using AI, yet to most people they're still making bad decisions. I.e., this problem already exists. It could be exacerbated by people willing to let AI take the wheel.
[+] RandomLensman|2 years ago|reply
What critical processes are you thinking about here?
[+] malermeister|2 years ago|reply
> Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:

* Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children

* Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics

* Biometric identification and categorisation of people

* Real-time and remote biometric identification systems, such as facial recognition

That seems... reasonable? I know regulation is a trigger word for some folks around here, but I think preventing a Chinese-style social credit system seems like a positive thing?

[+] xoa|2 years ago|reply
From an admittedly brief look (while interesting, it has no direct immediate relevance to me), I can't tell if this applies universally regardless of scale, or if there is any sort of gradient. Does anyone know? Some of it seems reasonable, period, at any scale, but some of it would strike me as worrisome or unreasonable if it applied to individuals as opposed to larger networks where emergent effects happen. To take a personal example, I selfhost Blue Iris and use some AI recognition in my home security system, which I also use for wildlife monitoring. It's really awesome vs the old plain motion or even motion zones, and there is zero exposure to the internet, nor even any view of public property at all (end of a 1/3 mile driveway with woods). It is useful to be able to classify known people (me, family, regular neighbors) vs unknown people as well. I think the societal harm potential of an isolated system like that is low relative to the individual benefit, and thus I'd lean towards skepticism if that too fell under their banner of "Biometric identification and categorisation of people" or "Real-time and remote biometric identification systems, such as facial recognition".

But of course in a public setting, and/or networked with lots of others at enough scale to start effectively following people through space and time rather than capturing one private slice, the metrics all change. So as I watch various AI efforts I'm always thinking about how they address that (or not). I don't think "profit" is even the deciding thing here; something can be non-commercial but still potentially a problem at enough scale.

[+] tensor|2 years ago|reply
Social scoring is an interesting topic. E.g. why is it not banned if you don't use AI? I'd guess that most social scoring systems today are not actually AI based. Perhaps they have some components that come from AI, but it's easy enough to make one with simpler mechanisms.

Also, I assume insurance risk scoring is not banned, even though that is already almost a social scoring system. E.g. you live in a dangerous poor neighbourhood, higher insurance! You are a given age, higher insurance!
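The point that scoring needs no AI can be made concrete; a hypothetical rule-based sketch (all flags and weights invented for illustration):

```python
# A "scoring" system built from hand-written rules -- no AI involved.
RISK_WEIGHTS = {
    "high_crime_neighbourhood": 40,  # hypothetical flag
    "age_under_25": 25,              # hypothetical flag
    "prior_claims": 20,              # hypothetical flag
}

def risk_score(profile):
    """Sum the weights of whichever flags apply to this profile."""
    return sum(w for flag, w in RISK_WEIGHTS.items() if profile.get(flag))
```

Whether regulation should treat this differently from a model that learned the same weights is exactly the open question.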

[+] ensignavenger|2 years ago|reply
It all comes down to the details. A meme sharing site that uses a computerized recommendation engine to recommend memes to users might be considered "cognitive behavioral manipulation".
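A minimal sketch of such an engine (data and names invented) shows how little machinery the phrase could cover:

```python
from collections import Counter

def recommend(user_likes, all_likes, k=3):
    """Recommend the most-liked memes this user hasn't liked yet."""
    counts = Counter(m for likes in all_likes for m in likes
                     if m not in user_likes)
    return [m for m, _ in counts.most_common(k)]
```

If popularity counting counts as "cognitive behavioural manipulation", the details really do matter.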
[+] _3u10|2 years ago|reply
Yeah it will be an EU style credit score.
[+] ecmascript|2 years ago|reply
I don't know what I think about this, but it's probably not great. I'm afraid this will hinder smaller developers and enable larger corporations to get ahead just because of the extra legalese.

Also, why is some stuff unacceptable for companies but acceptable for the government? That seems a bit like censorship to me. The EU is really becoming a large organization that just does random shit that people don't really ask for.

My government still publishes my personal data online for anyone to see, no one seems to care about that. Rules for thee but not for me.

[+] estebarb|2 years ago|reply
I get the good intentions, but I don't like that the restrictions may be too broad and may hinder useful innovation.

For example: I once got trapped against the door in a building during a fire alert. The door was badge controlled and several panicked people were pressing me against it. I would have loved a surveillance system that could detect scared people and decide to unlock the doors automatically in an emergency.

We should punish bad uses, not ban broad categories just because politicians lack creativity.

[+] Bjorkbat|2 years ago|reply
You don't necessarily need AI to create a door that opens during a fire alert. You could simply make it so that the door is no longer badge controlled if a fire alert goes off.

Sure, it's possible someone might exploit this by creating a fake fire alert to open the door, but they could also trick the AI you suggested by using a fake panicked expression, maybe get some friends in on the prank.
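A minimal sketch of that non-AI alternative (hypothetical controller, assuming a fire-alarm input is available):

```python
class DoorController:
    """Badge-controlled door that fails open during a fire alarm."""

    def __init__(self):
        self.fire_alarm_active = False

    def should_unlock(self, badge_ok):
        # During an alarm the badge check is bypassed entirely --
        # no vision model needed to detect panicked people.
        return self.fire_alarm_active or badge_ok
```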

Kind of a segue into why I think AI hype and crypto hype are uncannily similar. You could argue that blockchains have a lot of potential uses, it's just that we already have some kind of legacy technology that works fine and is usually much cheaper. In much the same way, I'd say 90% of people are using LLMs as search engines, albeit search engines that are more expensive and return strange results.

Anyway, point being, 99% of the applications one could imagine getting sniped by this law can probably be implemented using alternative, non-AI technologies.

[+] NotGMan|2 years ago|reply
>> Designing the model to prevent it from generating illegal content

Impossible.

>> That is why it requires that national authorities provide companies with a testing environment that simulates conditions close to the real world.

Who defines "close to real world"?

[+] djyaz1200|2 years ago|reply
Interested to see what comes of this... "Publishing summaries of copyrighted data used for training"
[+] Vuizur|2 years ago|reply
Everybody who has released a decent model has used a shadow library dump for training. Are they going to admit that they illegally downloaded more than 20 million books?
[+] Argonaut998|2 years ago|reply
There’s zero reason to found an AI business in the EU when the same opportunity exists in the US. How long until the likes of Mistral escape?
[+] malermeister|2 years ago|reply
Yes there is. I've lived in both the EU and the US for decades. The American lifestyle sucks. Don't underestimate that.
[+] amadeuspagel|2 years ago|reply
Didn't the mistral founders work in the US and move back precisely to found a company in france?
[+] Jensson|2 years ago|reply
The EU is much cheaper; that is a good reason. You can still sell in the USA even if you develop the AI in Europe. DeepMind still operates from Europe.
[+] andersa|2 years ago|reply
People are currently lobbying in the US to set up much worse legislation.
[+] lawlessone|2 years ago|reply
This really seems to just be an extension of GDPR.

Mostly just don't use AI to discriminate or manipulate people.

Quite light touch, and not at all as bad as I was expecting.

[+] kabigon|2 years ago|reply
We need to act on AI now. We need to limit its ubiquity; we cannot allow it to take ANYONE's job. We are heading towards absolute doom if we allow this. Humanity will be doomed.
[+] ben_w|2 years ago|reply
Technophobia is a constant surprise to me, especially here. But then, I'm weirdly far in the direction of embracing change — it took me until my 30s to learn about the Chesterton's Fence version of conservatism, and through that example that there was any form of conservatism which had merit.

Looking it up, I'm surprised how common technophobia is, 85-90%[0]; I should be more mindful of this.

I was going to say something about Marx welcoming this, but this time I found the issues of industrialisation and who benefits from the investments went further back than I previously thought: https://en.wikipedia.org/wiki/Protection_of_Stocking_Frames,...

[0] https://web.archive.org/web/20080511165100/http://www.learni...

[+] dr_dshiv|2 years ago|reply
Proposed: The laws were drafted in the age of narrow AI and have little relevance in the age of general AI (LLMs, etc).
[+] Jensson|2 years ago|reply
I think these parts are relevant, it will be very nice to get this for LLM and image generators:

> Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

> Disclosing that the content was generated by AI

> Designing the model to prevent it from generating illegal content

> Publishing summaries of copyrighted data used for training

This means that if an article was generated by an LLM, the publisher is now legally required to disclose that.

[+] zoobab|2 years ago|reply
Censorship.
[+] pelorat|2 years ago|reply
What? AI systems and LLMs do not have the same rights as humans do. In Europe corporations are thankfully not considered people like in the USA.
[+] seydor|2 years ago|reply
Premature legislation misses the target. GDPR targeted personal data collection, a relatively anodyne adverse effect of social media, but missed addiction, which is the real damaging externality. I wonder what they'll miss this time.