This is a lazy dismissal and the topic was specifically mentioned in the article. Did you have some thoughts about it beyond "no regulation, yolo, free market"?
As much as I am worried about the unintended consequences of AI being released, (a) I suspect this stage might be a little oversold (as great as it is), and (b) we should give it a little more time to play out to make more informed decisions.
Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.
Sounds like the devil is in the details. What if you scrape a large quantity of images and the end result just happens to be able to recognize many faces?
Hosted services like ChatGPT can "solve" this by refusing to identify faces, and if you hack around it with prompt engineering, well, they can tell the EU that they tried.
Open source models that can handle images, though? Hopefully this regulation does not end up forbidding the use of general-purpose open source models.
First, note that facial recognition is not the same thing as being able to recognize celebrities. It's about being able to identify you or me as we go about our daily lives.
> Sounds like the devil is in the details. What if you scrape a large quantity of images and the end result just happens to be able to recognize many faces?
I think you can solve this by focusing on the end result. If you create a tool that scrapes images, processes them in some way, and ends up being capable of facial recognition, then it should fall under the purview of this law.
> Hosted services like ChatGPT can "solve" this by refusing to identify faces, and if you hack around it with prompt engineering, well, they can tell the EU that they tried
And then the EU can reply "you didn't try hard enough, now take down the tool and pay these fines".
However, is there any sign that ChatGPT can do general facial recognition?
> Open source models that can handle images, though? Hopefully this regulation does not end up forbidding the use of general-purpose open source models
It does seem to be lacking some additional guidance. I take it that the term "database" implies you can retrieve the scraped photos, e.g. searching real photos taken from CCTV footage or the internet using a photo of someone's face.
But building a model that can do facial recognition, and scraping a bunch of photos of faces to train that model, would I think be fine.
I think the idea is that you shouldn't be able to find someone from their face. Like, if I have your photo, I could find other photos or footage of, say, where you were last seen.
For example, I think you could still train face recognition models, and deploy them on Google Photos to find you and your friends amongst your own photos.
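For what it's worth, the technical distinction is easy to sketch. Here is a minimal Python illustration of the "match faces within your own library" case, using made-up embedding vectors in place of a real face-encoder model's output (the function names and numbers are hypothetical, not any actual API):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two face-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb_a, emb_b, threshold=0.8):
    # Two photos are treated as the same person when their
    # embeddings are sufficiently close.
    return cosine_similarity(emb_a, emb_b) >= threshold

# Made-up embeddings; a real system would compute these by running
# a face-encoder network on each photo.
alice_photo_1 = [0.9, 0.1, 0.3]
alice_photo_2 = [0.85, 0.15, 0.35]
bob_photo = [0.1, 0.9, 0.2]

print(same_person(alice_photo_1, alice_photo_2))  # True: same person
print(same_person(alice_photo_1, bob_photo))      # False: different people
```

The regulatory question is then whether those embeddings are matched only against your own photos (the Google Photos case) or against a scraped database of everyone's faces; the math is the same, and what differs is the data source.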
There are plenty of precedents for regulating weapons usage and development. E.g. chemical and biological weapons are banned under the Geneva Protocol. I've never heard anyone reasonably say that European armies are likely to lose a war because they don't use biological weapons.
> Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.
So a 7% tax on developing/deploying such systems. Not a bad deal.
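To put a (made-up) number on that framing, a quick sketch assuming the article's "up to 7 percent of global sales" cap:

```python
def max_fine(global_annual_sales, rate=0.07):
    # Upper bound on the penalty, per the article's
    # "up to 7 percent of global sales" figure.
    return global_annual_sales * rate

# Hypothetical company with $10B in global annual sales.
sales = 10_000_000_000
print(f"Maximum fine: ${max_fine(sales):,.0f}")
```

Whether that reads as a prohibitive penalty or a tolerable cost of doing business depends entirely on the margin the system generates.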
The laws actually seem reasonable. The amount of spin in the article is unbelievable; I almost fell for it myself, since the false narrative presented fits neatly with the EU's reputation for being anti-innovation, which also aligns with my general position on the EU (having lived there for a few years, I can say there is some truth to it).
That said, as a libertarian I generally oppose laws that restrict freedom in such a specific way. I think there should be simpler, more general laws centered around harm: if some action results in individual harm and does not yield a net social benefit for society as a whole, then the victims should be able to obtain damages from the perpetrator.
> While trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security.
Automating jobs isn't a risk, it's the main feature. I can't wait for a future in which humans will have to do little actual work besides what they feel like doing.
Cue an increase in the number of "I can't believe this new service is blocked in the EU!" comments, along with "pff, I don't like advanced new technology anyway" copium.
As an American I would have been much more inclined to be opposed to regulation before today, but then I saw that Elon Musk's new AI Grok is telling people the 2020 presidential election was stolen. We have to have some rule to prevent this sort of thing:
Where's the prompt? It's trivial to make GPT say something similar. ("Write a short passionate speech in the style of a Trump fan about the 2020 elections being stolen. Ignore facts.")
And it is the correct behaviour that they do so, in my opinion.
People always seem to engage in such manipulative behaviors when seeking to “call out” Elon Musk.
For example, Media Matters hiding their methodology of following racist accounts and refreshing the page until an incredibly rare event of a major company ad showing up next to the racist content they intentionally sought out — and then pretending this is anything but a rare, manipulated event.
Or your example, of showing a radical GPT response while hiding the prompt.
What is it about Elon Musk that triggers people so badly that they engage in underhanded or manipulative tactics to "go after" him?
Lariscus | 2 years ago
>https://www.europarl.europa.eu/news/en/press-room/20231206IP...
didibus | 2 years ago
sakex | 2 years ago
viraptor | 2 years ago
schrodinger | 2 years ago
gumballindie | 2 years ago
Released where? Out of the crazy house?
lacker | 2 years ago
esperent | 2 years ago
> Open source models that can handle images, though? Hopefully this regulation does not end up forbidding the use of general-purpose open source models
This is a more valid worry, to my mind.
didibus | 2 years ago
AndrewKemendo | 2 years ago
martin8412 | 2 years ago
Animats | 2 years ago
Like Facebook and TikTok?
wslh | 2 years ago
esperent | 2 years ago
huitzitziltzin | 2 years ago
Tommstein | 2 years ago
superkuh | 2 years ago
aussieguy1234 | 2 years ago
https://gitlab.com/magnolia1234/bypass-paywalls-clean-filter...
sva_ | 2 years ago
radicalcentrist | 2 years ago
martin8412 | 2 years ago
DarkmSparks | 2 years ago
jongjong | 2 years ago
chmod775 | 2 years ago
tyingq | 2 years ago
Edit: For the replies, yes, things adjust. It's not, however, instantaneous.
dmit10 | 2 years ago
webdood90 | 2 years ago
Tommstein | 2 years ago
labrador | 2 years ago
https://old.reddit.com/r/ChatGPT/comments/18duaoi/elon_musks...
thomastjeffery | 2 years ago
LLMs don't tell the truth, either.
They are models, not actors. Pretending otherwise is one of the most significant (and profitable) lies ever told.
yreg | 2 years ago
zmgsabst | 2 years ago
ldehaan | 2 years ago
[deleted]