top | item 36353652


Simon321 | 2 years ago

Seems the EU is determined to cripple its AI industry at all costs; we already have so few technology companies...

Foundation models are labelled as 'high risk'!

In my opinion this is far too premature... it would cripple open source AI as well...

> While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.

> Any model made available in the EU, without first passing extensive, and expensive, licensing, would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue. Opensource developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available.

> Open Source LLMs Not Exempt: Open source foundational models are not exempt from the act. The programmers and distributors of the software have legal liability. For other forms of open source AI software, liability shifts to the group employing the software or bringing it to market. (pg 70).

Source: https://technomancers.ai/eu-ai-act-to-target-us-open-source-...

While I'm usually pro-EU, they are really overreacting here, and the long-run consequences for our economy of crippling a technology with so much potential will be enormous.



supermatt | 2 years ago

That article is pure FUD.

ALL the regulatory activity mentioned in the article is related to "high-risk" AI systems, which are specifically:

- where the AI is part of a safety system, and where that safety system already needs to undergo conformity assessment.

OR

- where the AI system poses a significant risk of harm to the health, safety or fundamental rights of natural persons - for very specific use-cases

https://www.europarl.europa.eu/resources/library/media/20230... (p122-125)

hdkrgr | 2 years ago

It goes further than that. The technomancers blog post gets a lot of the actual requirements completely wrong (for example, the supposed requirement for third-party or government "licensing", which is nowhere in the Act).

What really frustrates me about this whole discussion is seeing some SV heavyweights quoting this article uncritically and screaming about how stupid the EU is again, while referring to supposed requirements that are nowhere to be found in the Act. I would assume these people have access to the best information in the world, yet they don't seem to have had any of their staff actually read the draft. :/

FWIW, I quickly wrote up some thoughts at the time about what the technomancers article gets wrong, but then didn't get around to polishing and publishing them. If you're interested, here are my notes: https://gist.github.com/heidekrueger/bdee0268ecdad5f6b56f557...

Edit: I want to emphasize that I DO share some of the concerns the blog post raises about the current draft of the Act. I just wish we could have a meaningful discussion about it rather than name-calling and fearmongering.

Simon321 | 2 years ago

If this didn't apply to foundation models, then why would they write an article on whether those models currently comply?

> We assess the compliance of 10 foundation model providers—and their flagship foundation models—with 12 of the Act’s requirements for foundation models

The whole point of that article is to see what would apply to these models!

RobotToaster | 2 years ago

>where the AI system poses a significant risk of harm to the health

We've seen pushes from both sides to redefine anything they disagree with as harmful to mental health.

v7n | 2 years ago

In my opinion it's about time to clarify rules and standards for automated systems whose incorrect output can, for example, kill someone.

Simon321 | 2 years ago

ChatGPT and Stable Diffusion are killing people now? Incorrect output from regular software actually does kill people, yet we don't need a license to write code.

mrtksn | 2 years ago

The EU crippling innovation through regulation is a known meme, but can you actually name a regulation that crippled innovation?

It's usually Americans who freak out over this, I assume based on their local experience. It's the same with unions or anything else that works completely differently in the EU and the USA. It cuts both ways: the European understanding of the American healthcare system is also a caricature.

Wrong analogies don't help either, like assuming that the EU is like the American federal system or that European law works like American law.

In the specific case of artificial intelligence, the EU is interested in regulating high-risk systems, but the online conversation revolves around people freaking out that the EU will ban their home-grown language model.

Simon321 | 2 years ago

I am from the EU, not American, and I am not anti-regulation in general. An example of where this has happened before is GMOs:

> In 2006, the World Trade Organization concluded that the EU moratorium, which had been in effect from 1999 to 2004,[12] had violated international trade rules.[13][14]

We had a moratorium for years, and even now we have the most stringent GMO regulations in the world. This crippled GMO research in Europe.