DaveMcMartin | 1 year ago

Does this only apply to usage, or does it include training the model as well? Training a model is extremely expensive, and it’s hard to imagine a company investing a huge amount of money to develop two different models just to comply with regulations (though maybe it’s worth it, I’m just guessing here).

I think it’s more likely that companies would adhere to EU regulations and use the same model everywhere or implement some kind of filter.

drakonka | 1 year ago

Not a lawyer.

When I attended a conference about this, I remember a distinction being drawn between "Provider" and "Deployer". Providers are manufacturers developing a tool; deployers are professional users making a service available using the tool. A deployer may deploy a provided AI tool/model in a way that falls within the definition of unacceptable risk, and it is (also) the deployer's responsibility to ensure compliance.

The example given was of a university using AI for grading. The university is a deployer, and it is their responsibility to conduct a rights impact assessment before deploying the tool to its internal users.

This was compared to normal EU-style product safety regulation, which is directed at the manufacturer (what would be the provider here): if you make a stuffed toy, don't put in such-and-such chemicals, etc. Here, the _application_ of the tool is under scrutiny as well, not just the tool itself. Note that this is based on very hasty notes[0] from the talk - I'm not sure to what extent the provider vs deployer responsibility divide is actually codified in the act.

[0] https://liza.io/ai-act-conference-2024-keynote-notes-navigat...