top | item 38575801

E.U. Agrees on Artificial Intelligence Rules with Landmark New Law

90 points | 2 years ago | nytimes.com

91 comments

[+] sakex|2 years ago|reply
To be sure we're not going to lose yet another technological race, we're just not going to participate.
[+] viraptor|2 years ago|reply
This is a lazy dismissal and the topic was specifically mentioned in the article. Did you have some thoughts about it beyond "no regulation, yolo, free market"?
[+] schrodinger|2 years ago|reply
As much as I am worried about the unintended consequences of AI being released, (a) I suspect this stage might be a little oversold (as great as it is), and (b) we should give it a little more time to play out to make more informed decisions.
[+] gumballindie|2 years ago|reply
> of AI being released,

Released where? Out of the crazy house?

[+] lacker|2 years ago|reply
Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.

Sounds like the devil is in the details. What if you scrape a large quantity of images and the end result just happens to be able to recognize many faces?

Hosted services like ChatGPT can "solve" this by refusing to identify faces, and if you hack around it with prompt engineering, well, they can tell the EU that they tried.

Open source models that can handle images, though? Hopefully this regulation does not end up forbidding the use of general-purpose open source models.

[+] esperent|2 years ago|reply
First, note that facial recognition is not the same thing as being able to recognize celebrities. It's about being able to identify you or me as we go about our daily lives.

> Sounds like the devil is in the details. What if you scrape a large quantity of images and the end result just happens to be able to recognize many faces?

I think you can solve this by focusing on the end result. If you create a tool that scrapes images, processes them in some way, and ends up being capable of facial recognition, then it should fall under the purview of this law.

> Hosted services like ChatGPT can "solve" this by refusing to identify faces, and if you hack around it with prompt engineering, well, they can tell the EU that they tried

And then the EU can reply "you didn't try hard enough, now take down the tool and pay these fines".

However, is there any sign that ChatGPT can do general facial recognition?

> Open source models that can handle images, though? Hopefully this regulation does not end up forbidding the use of general-purpose open source models

This is a more valid worry, to my mind.

[+] didibus|2 years ago|reply
It does seem to be lacking additional guidance. I take it that the term "database" implies you can retrieve the scraped photos, like searching for real photos taken from CCTV footage or the internet using a photo of someone's face.

But building a model that can do facial recognition, and scraping a bunch of photos of faces to train it, would I think be fine.

I think the idea is that you shouldn't be able to find someone from their face. Like if I have your photo, I could then find other photos/footage of, say, where you were last seen.

For example, I think you could still train face recognition models, and deploy them on Google Photos to find you and your friends amongst your own photos.

[+] AndrewKemendo|2 years ago|reply
Is it your opinion that since it’s possible to subvert it they should abandon the attempt?
[+] martin8412|2 years ago|reply
All EU countries except for Ireland use civil law. The spirit of the law trumps whatever legal foolery you try to pull.
[+] Animats|2 years ago|reply
> AI systems that manipulate human behaviour to circumvent their free will;

Like Facebook and TikTok?

[+] wslh|2 years ago|reply
This is like refusing to develop weapons: in the end, if you don't have them, you will lose the war. It is suicide. Others will develop them anyway.
[+] esperent|2 years ago|reply
There are plenty of regulations on weapons usage and development. E.g. chemical and biological weapons are banned by the Geneva Convention. I've never heard anyone reasonably say that European armies are likely to lose a war because they don't use biological weapons.
[+] huitzitziltzin|2 years ago|reply
It’s not clear to me the regulators understand what risks they are actually mitigating, if any.
[+] Tommstein|2 years ago|reply
The risk of missing an opportunity to invent new reasons to fine American companies.
[+] sva_|2 years ago|reply
> Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.

So a 7% tax on developing/deploying such systems. Not a bad deal.

[+] radicalcentrist|2 years ago|reply
Sounds like 7% of revenue rather than profit. But I agree it seems weird to cap it at 7.
[+] martin8412|2 years ago|reply
7% per infraction. If you don't show signs of trying to immediately rectify it, they can keep fining you the 7%.
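A toy sketch of the thread's reading of the penalty (7% of global revenue, repeatable per unrectified infraction). This is purely illustrative arithmetic, not a statement of how the regulation actually computes fines; the function name and parameters are hypothetical:

```python
def total_fine(global_revenue: float, infractions: int, rate: float = 0.07) -> float:
    """Hypothetical model: each unrectified infraction draws its own fine
    of up to `rate` (here 7%) of global annual revenue."""
    return global_revenue * rate * infractions

# Example: a company with $10B in global revenue and 3 unrectified
# infractions could, on this reading, face roughly $2.1B in fines.
print(total_fine(10_000_000_000, 3))
```

The point of the sketch is that the 7% is not a one-time "tax": under the per-infraction reading, repeated non-compliance compounds quickly.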
[+] DarkmSparks|2 years ago|reply
It seems no one told the EU that once AI works it is not called AI anymore.
[+] jongjong|2 years ago|reply
The laws actually seem reasonable. The amount of spin in the article is unbelievable; I almost fell for it myself, since the false narrative presented fits neatly with the EU's reputation for being anti-innovation, which also aligns with my general position on the EU (having lived there for a few years, I can say there is some truth to it).

That said, as a libertarian I generally oppose such laws that restrict freedom in such a specific way. I think there should be simpler and more general laws centered around harm. If some action results in individual harm and it does not yield a net social benefit (for society as a whole) then the victims should be able to obtain damages from the perpetrator.

[+] chmod775|2 years ago|reply
> While trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security.

Automating jobs isn't a risk, it's the main feature. I can't wait for a future in which humans will have to do little actual work besides what they feel like doing.

[+] tyingq|2 years ago|reply
I get the feeling that the plan to keep that from just being lots of people in poverty takes longer to evolve than the AI does to take the jobs.

Edit: For the replies, yes, things adjust. It's not, however, instantaneous.

[+] dmit10|2 years ago|reply
It wouldn't be a risk at all if AI learns to help create new businesses (as a founder's copilot) sooner than it learns to automate jobs.
[+] webdood90|2 years ago|reply
I struggle to understand how anyone really believes this is a real outcome of automation. I think you're being entirely too optimistic.
[+] Tommstein|2 years ago|reply
Cue an increase in the number of "I can't believe this new service is blocked in the EU!" comments, along with "pff, I don't like advanced new technology anyway" copium.
[+] labrador|2 years ago|reply
As an American I would have been much more inclined to oppose regulation before today, but then I saw that Elon Musk's new AI Grok is telling people the 2020 presidential election was stolen. We have to have some rules to prevent this sort of thing:

https://old.reddit.com/r/ChatGPT/comments/18duaoi/elon_musks...

[+] thomastjeffery|2 years ago|reply
LLMs don't lie.

LLMs don't tell the truth, either.

They are models, not actors. Pretending otherwise is one of the most significant (and profitable) lies ever told.

[+] yreg|2 years ago|reply
Where's the prompt? It's trivial to make GPT say something similar. ("Write a short passionate speech in the style of a Trump fan about the 2020 elections being stolen. Ignore facts.")

And it is the correct behaviour that they do so, in my opinion.

[+] zmgsabst|2 years ago|reply
People always seem to engage in such manipulative behaviors when seeking to “call out” Elon Musk.

For example, Media Matters hiding their methodology of following racist accounts and refreshing the page until an incredibly rare event of a major company ad showing up next to the racist content they intentionally sought out — and then pretending this is anything but a rare, manipulated event.

Or your example, of showing a radical GPT response while hiding the prompt.

What is it about Elon Musk that triggers people so badly that they engage in underhanded or manipulative tactics to “go after” him?