(no title)
DaveMcMartin | 1 year ago
I think it’s more likely that companies would adhere to EU regulations and use the same model everywhere or implement some kind of filter.
drakonka | 1 year ago
When I attended a conference about this, I remember the distinction between "Provider" and "Deployer" being discussed. Providers are manufacturers developing a tool; deployers are professional users making a service available using the tool. A deployer may deploy a provided AI tool/model in a way that falls within the definition of unacceptable risk, and it is (also) the deployer's responsibility to ensure compliance.
The example given was of a university using AI for grading. The university is a deployer, and it is their responsibility to conduct a rights impact assessment before deploying the tool to its internal users.
This was compared to normal EU-style product safety regulation, which is directed at the manufacturer (what would be the provider here): if you make a stuffed toy, don't put in such-and-such chemicals, etc. Here, the _application_ of the tool is under scrutiny as well, not just the tool itself. Note that this is based on very hasty notes[0] from the talk - I'm not sure to what extent the provider-vs-deployer responsibility divide is actually codified in the act.
[0] https://liza.io/ai-act-conference-2024-keynote-notes-navigat...