top | item 42929656

cabalamat | 1 year ago

> and that may exhibit adaptiveness after deployment

So if an AI can't change its weights after deployment, it's not really an AI? That doesn't make sense.

As for the other criteria, they're so vague I think a thermostat might apply.

stubish | 1 year ago

Keyword 'may'.

A learning thermostat would apply, say one that uses historical records to predict changes in temperature and preemptively adjusts. In most cases it would be low risk and unregulated. But attach it to a self-heating crib or a premature-baby incubator and it would jump to high risk, and you might have to prove it is safe.
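The kind of "adaptiveness after deployment" described above can be trivially simple. A minimal sketch (the class, names, and the averaging rule are my own illustration, not anything from the Act):

```python
from collections import defaultdict

class LearningThermostat:
    """Toy thermostat that adapts after deployment: it logs observed
    temperatures per hour and preheats when the historical average
    for the coming hour falls below the setpoint."""

    def __init__(self, setpoint=20.0):
        self.setpoint = setpoint
        self.history = defaultdict(list)  # hour of day -> observed temps

    def record(self, hour, temp):
        # "Learning" step: just accumulate historical readings.
        self.history[hour].append(temp)

    def predicted(self, hour):
        temps = self.history.get(hour)
        if not temps:
            return self.setpoint  # no data yet: assume on target
        return sum(temps) / len(temps)

    def should_preheat(self, next_hour):
        # Preemptively heat if the coming hour is historically cold.
        return self.predicted(next_hour) < self.setpoint

t = LearningThermostat(setpoint=20.0)
for temp in (14.0, 15.0, 13.0):
    t.record(6, temp)          # 6am is historically cold
preheat = t.should_preheat(6)  # average 14.0 < 20.0, so preheat
```

Nothing here looks like "AI" in the colloquial sense, yet its behaviour changes with the data it sees after deployment, which is the point of the "may exhibit adaptiveness" wording.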

butlike | 1 year ago

So if the thermostat jumps to 105 during the night, that's not considered 'high-risk?'

sofixa | 1 year ago

> As for the other criteria, they're so vague I think a thermostat might apply.

As long as the thermostat doesn't control people's lives, that's fine.

logifail | 1 year ago

> they're so vague I think a thermostat might apply

Quite.

One wonders if the people who came up with this have any actual understanding of the technology they're attempting to regulate.

zelphirkalt | 1 year ago

It _may_ exhibit adaptiveness after deployment; whether it does or not would not change whether it is AI. I think that is the right reading of the definition.