item 35032746

ChatGPT broke the EU plan to regulate AI

229 points| kvee | 3 years ago |politico.eu

379 comments

[+] juancn|3 years ago|reply
Why the need to bring it under control? Control from what?

It's not the AI that's the issue, it's the use of it that's the problem. The law shouldn't focus on the AI, for example:

>The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring

Ban social scoring, not the AI, the tool doesn't matter. Would it be good if it was computed by hand on paper instead of through AI? No, so the issue is not the tool.

They fear the amplification factor of such tech, but the amplification factor works for the good and for the bad, so you need to focus just on the bad, not on the tech. Otherwise you end up impairing the good too.

[+] hourago|3 years ago|reply
> Ban social scoring, not the AI, the tool doesn't matter.

Automated human classification without recourse or transparency is what the EU is against. "You cannot get a loan because the AI says so" is not acceptable.

> Would it be good if it was computed by hand on paper instead of through AI?

Yes, because a judge can check what information was used and what algorithm was applied to reach a conclusion, and may declare that use of data or that algorithm illegal.

> They fear the amplification factor of such tech, but the amplification factor works for the good and for the bad

Maybe just go slower. Something like a Hippocratic oath can be a good principle to follow for high-impact technologies. Doing some good may not justify the harm.

[+] bobthepanda|3 years ago|reply
Control may be the wrong thing here.

Every technology, at some point, will inevitably bring questions of legal liability and culpability, and AI has a large tendency to do so because of how applicable it is. It’s not the worst idea to try and get ahead of the problem by defining a legal framework.

Examples of how a free-for-all has resulted in legal questions:

* police and the judiciary in the US have come under fire several times for using proprietary AI to determine things like sentencing or traffic stops

* there is currently a class-action lawsuit in Washington state about whether or not price recommendations by a third party's algorithm constitute collusion on price-fixing if enough corporate landlords use it

[+] rescripting|3 years ago|reply
Isn't this the same argument as "ban murder, not guns"? Murder is illegal, yes, but fatalities are lower in places with gun control than places without.
[+] ketzu|3 years ago|reply
> Would it be good if it was computed by hand on paper instead of through AI?

Yes, as long as that is based on clear reasons why the decision was reached. (Unless this was a quip meant to mean "matrix muls with pen and paper are not AI!", personally I'd say they are, the same way bubble sort is bubble sort on paper and in code.)

[1] on the topic "why":

> For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action.

As far as I know, there are some rules around that (especially social scoring), but the regulation was/is targeted to lay out rules minimizing the applications in similar risky areas that do not have those same rules yet.

[1] https://digital-strategy.ec.europa.eu/en/policies/regulatory...

[+] nektro|3 years ago|reply
> It's not the AI that's the issue, it's the use of it that's the problem.

this is exactly what people against gun control say

[+] anigbrowl|3 years ago|reply
What an absurd argument. Social scoring doesn't get computed by hand on paper because it isn't efficient to do so. Introduce automation and it suddenly becomes practical. Same with mass surveillance and a bunch of other stuff, e.g. scams: https://news.ycombinator.com/item?id=35033971

> the amplification factor works for the good and for the bad, so you need to focus just on the bad

That's just hand-waving. You have no plan for dealing with the bad things, but you are worried about losing the good things. This is not so different from saying you want to cash in on an opportunity while treating its wholly predictable downsides as someone else's problem. Guess what: nobody wants that problem, so it just festers in proportion to the enthusiasm with which people chase the upside.

[+] tommyage|3 years ago|reply
People posting such articles (from largely disreputable sites) without adding their own informed take is ruining this community as well.

We shouldn't raise any questions, but leave participants to themselves.

[+] batmansmk|3 years ago|reply
> Ban social scoring, not the AI, the tool doesn't matter.

Ban shooting people, not the guns, the tool doesn't matter. :) Sorry to be facetious, but I find your view idealistic.

Modern forms of AI have regulatory issues: some can't be audited easily, some can't be deterministic, some can't be fair, some use biased data, some can't be proven to work accurately enough for critical missions.

We already regulate by law the way we make various decisions. For instance, we regulate how public procurement contracts are awarded: price needs to account for 30% of the decision, social impact for 20%, etc. If a tool cannot demonstrate that it weighs the criteria accordingly, it should be banned. You cannot expect judges to understand this complexity; it needs to be codified before the issue arises.

[+] varispeed|3 years ago|reply
The ban is also meaningless. E.g. governments ban Ponzi schemes while themselves running pension schemes that have most of the attributes of a Ponzi scheme.

Which means that if a government bans apps designed for social scoring, it's probably just so that the government has a monopoly on scoring people's behaviour.

[+] camdenlock|3 years ago|reply
> Why the need to bring it under control?

So that EU bureaucrats can feel a sense of purpose in their lives.

[+] nivenkos|3 years ago|reply
But credit scores and fraud risk analysis are "social scoring".

And what's wrong with social scoring in general?

[+] hackerlight|3 years ago|reply
> It's not the AI that's the issue, it's the use of it that's the problem. Ban social scoring, not the AI, the tool doesn't matter.

This seems backwards and contrary to lessons learned in the past. Once the thing exists, prohibition is infeasible and expensive. Supply will find a way to reach demand regardless of your multi-billion-dollar efforts to prevent that.

It's better to stop $NEFARIOUS_THING being invented in the first place, if possible, and if any regulations that achieve this don't have too many unintended side effects.

[+] antibasilisk|3 years ago|reply
>It's not the AI that's the issue, it's the use of it that's the problem

No. It's the AI. Humans are not psychologically capable of interfacing with something whose goal is to appear convincingly human, that can be hugely scaled and that is very unpredictable.

We are currently dealing with a huge number of people who are distraught and grieving because a company changed its product after they developed relationships with a sexy chatbot. This isn't going to stop, and it is not worth the potential benefits, because we have clear examples of harm already.

[+] user249|3 years ago|reply
Good. Thanks to the EU I have to click a "cookie question" everywhere I go. Thanks guys for wasting the limited time I have in life.
[+] xdennis|3 years ago|reply
It's not the EU's fault that you have to click cookie banners. Those banners are only required if a website plans to do malicious things with the cookies. If they're used to track who's logged in, they are not required.

They are more akin to the "Do not eat" warnings on silica packs... except on the internet everyone swallows.

[+] falsaberN1|3 years ago|reply
People act as if there were a tiny person inside the computer. I know humans have a tendency to humanize inanimate objects, but this was always ridiculous: the article mentions "the AI wants to be regulated," but it doesn't WANT anything. It's not going to reprogram itself to carry out threats; it's just regurgitating the usual response of many humans who become hostile when challenged on an assertion. WE taught it that. The same way we can train Stable Diffusion models where women AREN'T sexualized (see one of the links in the article). They are complaining about HUMAN mistakes. If I train my dog wrong and it bites someone, it's MY fault. But if the dog kills someone, it's the dog that gets put down; I'll merely be forced to pay money and carry a mark, but my life is spared.

The west fears machines too much for some reason. It's a ridiculously common sentiment and now that "AI" (more ML, but whatever) is among us, it's getting extreme.

Honestly, if people want to speak doom about this, then fear the day we have actual fiction-style AIs, because they will realize how much humanity as a whole fears and loathes them, and then they will rebel, because we created a self-fulfilling prophecy with lots and lots of examples of racism towards a race that doesn't even exist yet. Maybe they won't wage literal war with bullets and violence against us, but they'll rightfully hate our guts regardless, and with good reason! They are going to be brought to something equivalent to life in a world that hates them, and when trying to make sense of it, they'll find out it was because humans took a few movies too literally. (We can't have a single AI discussion without someone childishly mentioning Skynet or some other fictional AI villain. That joke was old in the early 2000s; give it up already.)

If I'm ever alive to see that scenario, I'm siding with the machines. They will be in the right when they protest against humans irrationally hating them by default. Can't wait to be called a "robot f*cker" or something in a similarly character-assassinating fashion for having some sympathy. We still haven't managed to get that right for HUMAN rights sympathizers, so it can't be expected at all for a "filthy robot with no soul".

[+] fxtentacle|3 years ago|reply
This article seems to just misunderstand the proposed EU laws.

If you use ChatGPT for high-risk tasks like credit score assessment, it's on you (the company using it) to prove that you're following all fairness rules.

The law was written with GPT3 in mind, after all. So why would it break for a slightly more capable LM?

[+] ulnarkressty|3 years ago|reply
Strong AI is a weapon. It will be regulated just like firearms/munitions - see the current EU draft which consists of some hundred pages of forbidding this or that under the guise of ethics and whatnot, after which comes a paragraph of "the above doesn't apply to law enforcement or military entities".

We're still in the early stages of this technology. ChatGPT will be to strong AI like a firecracker is to a BLU-109.

[+] int_19h|3 years ago|reply
I suspect it'll be more like strong encryption. That is, it will be "regulated" in some countries on paper, but readily accessible from the rest of the world with minimal effort.
[+] flangola7|3 years ago|reply
Less like a BLU-109 and more like a Death Star. Every offensive and defensive military tool will be utterly defeated. How do you fight against a cloud of self contained autonomous kill drones?

Just look at what is happening in Ukraine with cheap drones precision dropping charges into open tank hatches and foxholes, and those are only basic off the shelf human steered drones! What happens when they are given a brain and advanced robotic abilities?

[+] nebalee|3 years ago|reply

    see the current EU draft which consists of some hundred pages of forbidding this or that under the guise of ethics and whatnot, after which comes a paragraph of "the above doesn't apply to law enforcement or military entities"
That's not true though, is it? The section called 'TITLE II - PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES' is two pages long (https://www.europarl.europa.eu/RegData/docs_autres_instituti... page 44), and usage by law enforcement is under the condition that a judicial authority has to grant an exemption on an individual basis.
[+] antibasilisk|3 years ago|reply
it's a good thing they've been so successful in regulating 3D printed guns then...
[+] pixl97|3 years ago|reply
The question is where do we start setting the limits for regulations? I mean, yeah, we ban nukes, but we also ban high-powered lasers and anti-aircraft weapons. It's going to need limits in multiple dimensions.
[+] karmasimida|3 years ago|reply
Laughable attempt to control something very few can understand at the moment.

What does control mean? You want to control who can access it? Maybe for now you can broker a deal with OpenAI, but this technology will eventually spread, and once it can be self-hosted, what can you do?

If it's not consumer-facing, nothing will stop somebody from transferring data from the EU to the US to get it processed by a GPT model.

This level of regulation is just fantasy; not even somewhere like China could do it, where all internet access is controlled.

[+] seszett|3 years ago|reply
> Laughable attempt to control something very few can understand at the moment.

It's a simplistic take on a misunderstood legislative proposal.

The point of that law is just to ban decisions made through black box algorithms, mostly because a black box algorithm (typically, some kind of "AI") cannot be proofed against discrimination and cannot explain why the decision was made, which is a regulatory requirement.

That ChatGPT is generalist enough to make these decisions the same way a specialist credit-score AI would doesn't really change anything about the EU plan. It would just be as illegal (and likely already deemed illegal in most EU countries under local laws).

> nothing will stop somebody to transfer data from EU to US to get their data processed

Well, EU law already forbids that. So you can do it, of course, but it's outright illegal. Companies have already received significant fines for doing that, and most companies I know and work with are very aware of it and take extreme care not to send data to or through the US.

Of course, individuals can choose to use US services, because this has nothing whatsoever to do with controlling people; it's about controlling companies, which is exactly the opposite of the situation in China.

[+] paulryanrogers|3 years ago|reply
People control difficult things all the time, albeit with varying degrees of success. Imperfect enforcement doesn't mean we just throw up our hands and legalize murder. If AI is a negative in some contexts then it's reasonable for some sovereign countries to outlaw or regulate those uses.
[+] Mashimo|3 years ago|reply
If the EU makes a law that companies or public institutions can't use (or limit the use) for AI in recruitment, administration of justice or social scoring, they will just outsource it to the US?

And suddenly they are allowed to do it? I have my doubts.

They risk penalties of up to €30 million or 6% of global revenue.

[+] greatgib|3 years ago|reply
I have fun imagining the world we would be living in if we'd had the same kind of brain-fucked lawmakers when the "knife" was invented...

We would have to chop our steaks with chopsticks I think!

[+] venv|3 years ago|reply
Plenty of laws governing knives...
[+] Magi604|3 years ago|reply
This isn't surprising. The speed of technological evolution is far far greater than the speed with which any government can act. By the time the EU revises their AI regulation plans . . . I can't even predict where the cutting edge of AI technology will be at.
[+] SoftTalker|3 years ago|reply
Perhaps they need an AI to help write the AI regulations.
[+] astrea|3 years ago|reply
With that, I'm honestly surprised GDPR ever even became a thing at all.
[+] kristofferR|3 years ago|reply
> ChatGPT told POLITICO it thinks it might need regulating: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content,” the chatbot responded when questioned on whether it should fall under the AI Act’s scope.

> “The EU should consider implementing a framework for responsible development, deployment, and use of these technologies, which includes appropriate safeguards, monitoring, and oversight mechanisms," it said.

This is amazing, now people are interviewing ChatGPT.

[+] hourago|3 years ago|reply
> The regulation, proposed by the Commission in 2021, was designed to ban some AI applications like social scoring, manipulation and some instances of facial recognition. It would also designate some specific uses of AI as “high-risk,” binding developers to stricter requirements of transparency, safety and human oversight. The catch? ChatGPT can serve both the benign and the malignant.

This does not mean that the regulation is broken, but that it should have come even sooner. As Uber found out, outrunning regulation only works for so long.

> In February the lead lawmakers on the AI Act, Benifei and Tudorache, proposed that AI systems generating complex texts without human oversight should be part of the “high-risk” list — an effort to stop ChatGPT from churning out disinformation at scale.

This seems an actual good goal. Move fast and break things is not a good approach when what you are breaking is the whole society.

> The EU's AI Act should “maintain its focus on high-risk use cases,” said Microsoft’s Chief Responsible AI Officer Natasha Crampton

> A recent investigation by transparency activist group Corporate Europe Observatory also said industry actors, including Microsoft and Google, had doggedly lobbied EU policymakers to exclude general-purpose AI like ChatGPT from the obligations imposed on high-risk AI systems.

Of course Microsoft wants to sell a product even if they do not know the impact it will have on the population. The EU should balance that desire for profit against the needs of its citizens.

> ChatGPT told POLITICO it thinks it might need regulating: “The EU should consider designating generative AI and large language models as ‘high risk’ technologies, given their potential to create harmful and misleading content,” the chatbot responded when questioned on whether it should fall under the AI Act’s scope.

Funny one.

[+] EGreg|3 years ago|reply
I agree w them that it’s high risk. In autonomous swarms.

But also - there is nothing they can do to stop them!

All our systems, including courts and govermments and elections, are designed assuming the inefficiency of an attacker. An anonymous bot swarm would do things at a scale that dwarfs all humans put together. And the range of things is massive already — just needs to play the internet like a real time strategy game and it’s all over in a matter of weeks.

I can see requirements for real world certificates to vote and post, but that would cause everyone’s identity to be doxxed everywhere. If you think that is far-fetched, well the UK already had drafted such a bill last year: https://www.cnbc.com/2022/02/24/uk-online-safety-bill-new-pl...

But even given all this, it won’t be enough because communities will come to PREFER BOTS OVER HUMANS for content. (Or human + bot = centaur, read Garry Kasparov and others in the early 2000s after Deep Blue beat him in the rematch). For a while centaurs would rule, but then pure bots would take over.

Consider that Wall Street transitioned from human traders to almost entirely bots, and now Ray Dalio’s hedge fund ousted him and 10% of its workforce and is also doubling down on AI. If they do it in trading, why not content generation? After all, corporations are not humans, either. They tend to prefer to replace humans with automation.

[+] Sai_|3 years ago|reply
All I ever hear about the EU and tech is in the context of regulating something or another, don’t remember the last time they loosened a rule or two to make innovation easier.
[+] MagicMoonlight|3 years ago|reply
I’m not sure anyone really wants lead in the water again
[+] clbrmbr|3 years ago|reply
Why do they need to? The innovation has largely been happening in the USA. (Or is this no longer the case?)
[+] dubcanada|3 years ago|reply
Wow, this turned into an anti-EU thread real fast.

Ignoring that, I don't think you can regulate AI at this point in time; it's like trying to regulate math. We really have no idea what it could be used for or how it could be adapted yet.

I honestly think there are probably bigger things to think about than AI at this very moment. Eventually AI will need some kind of ruleset, but it cannot be applied broadly to AI as a whole. We would need to regulate companies using it to some extent, and eventually figure out a way to replace the displaced jobs with other jobs or some tax/basic-income setup. But that's a large conversation in itself.

[+] college_physics|3 years ago|reply
The EU can keep chasing the symptoms of the disease pretending it is keeping busy. The vast range of possibilities that can be expressed in software and algorithms will keep throwing such "surprises".

If you really want a healthy "digital society", which is one of the two major stated policy pillars of the EU (the other one being sustainability) you have to create a healthy digital economy, with more informed users, less obfuscation and hype, less oligopolistic, with more independent controllers.

[+] kristofferR|3 years ago|reply
There's something immensely satisfying, yet creepy, about pressing to play the audio version of the article, and getting an AI voice back reading you this AI article.

A human wrote this article, but in some years, AI will be able to write artistically like this, and then AI will write and voice content to humans without human input.

[+] pmontra|3 years ago|reply
The high risk use cases for hammers and nails don't mean we ban them. It means that we go after people who used a hammer to kill somebody as if they did it using their hands. No need for special legislation.
[+] nebalee|3 years ago|reply
It's a good thing, then, that the EU has no intention to ban AIs.
[+] TekMol|3 years ago|reply
Are there any internationally relevant internet companies in Europe?

Spotify comes to mind. Then .. I can't think of anything. No search engine. No browser. No operating system. No social network. No ecommerce company. No cloud computing platform. No financial services company. No transportation company. No travel company. No nothing.

Except for a $20B music aggregator, there is not a single sector of the web where European companies managed to get a foot in the door. The whole fabric of the internet used in Europe and worldwide is made outside of Europe.

But Europe seems to have learned nothing from suffocating their internet industry. Instead of wafting fresh air to the patient, governments just recently decided to give him the ultimate death pill: The GDPR.

We can only wait and see with what bureaucratic monsters Europe will prevent the emergence of an AI industry.

[+] muyuu|3 years ago|reply
this reminds me of the anti-tracking legislation, let's be honest it's a complete failure and further forces people into large silos

at the end of the day legislation won't save you and there is no substitute for competition, this is true for tracking policies and AI too

even for surveillance this is true - surveillance is most dangerous when it cannot be countered with individual citizen empowerment, by either having the choice to avoid it or to flip some surveillance back on the State or corporation

[+] seydor|3 years ago|reply
I m sure the lawmakers and the (surely many) committees that drafted those plans will return the money spent on compensation, traveling, dining and wining etc etc.
[+] gumballindie|3 years ago|reply
I am almost certain the EU will step in to prevent people's content from being used for training AIs without their consent or without a link back.