
hdkrgr | 2 years ago

Since there's some confusion about this:

- The AI Act regulates both 'high-risk AI systems' and 'foundation models' and applies different requirements for them.

- 'Foundation models' are essentially defined in the act as "very large scale and expensive generative AI models that will probably only be offered via API" (my words). The reason the act wants to regulate them is so that USERS of foundation models have a chance to make their downstream use case compliant if that use case is high-risk. For example, if I'm a health insurance provider and I'm using a chatbot powered by GPT-4 in my health insurance sign-up flow, then my system may be high-risk and needs to be compliant. To do that, I need access to some information about GPT-4 (e.g. expected error modes, potential biases, etc.).

- The wording of the act makes a point of highlighting that your run-of-the-mill open source generative AI project will not constitute a 'foundation model'. The exact scale at which a project becomes a regulated 'foundation model' is not yet clear, but it can be assumed to be at least tens of millions of dollars. If you can spend that much on compute and researchers, I think you can spend a few k on becoming compliant.

- The technomancers article confuses the requirements for high-risk systems with those for foundation models. (It also gets some of the high-risk requirements completely wrong, but that's another discussion.)

- The Stanford HAL website does a great job with the facts! I really value seeing thoughtful contributions to the discussion like theirs. (Especially from an American institution!)
