item 35963191

kubota | 2 years ago

You lost me at "While AI regulation is important" - nope, congress does not need to regulate AI.


haswell | 2 years ago

I’d argue that sweeping categorical statements like this are at the center of the problem.

People are coalescing into “for” and “against” camps, which makes very little sense given the broad spectrum of technologies and problems summarized in statements like “AI regulation”.

I think it’s a bit like saying “software (should|shouldn't) be regulated”. It’s a position that cannot be defended because the term software is too broad.

tessierashpool | 2 years ago

"important" does not mean "good." if you are in the field of AI, AI regulation is absolutely important, whether good or bad.

runarberg | 2 years ago

If AI is to be a consumer good—which it already is—it needs to be regulated, at the very least to ensure equal quality of service for a diverse set of customers and other users. Unregulated, there is a high risk of people being harmed, e.g. by employers and landlords using AI to discriminate, or by being sold an AI solution that isn’t as advertised.

If AI will be used by public institutions, especially law enforcement, we need it regulated in the same manner. A bad AI trained on biased data has the potential to be extremely dangerous in the hands of a cop who is already predisposed to racist behavior.

candiodari | 2 years ago

AI is being used as a consumer good, including to discriminate:

https://www.smh.com.au/national/nsw/maximise-profits-facial-...

AI is being used by law enforcement and public institutions. In fact so much that perhaps this is a good link:

https://www.monster.com/jobs/search?q=artificial+intelligenc...

In both cases it's too late to do anything about it. AI is "loose". Oh, and I don't know if you noticed, but governments have collectively decided law doesn't apply to them, only to their citizens, and only in a negative way. For instance, just about every country has laws on the books guaranteeing timely emergency care at hospitals, with "timely" defined as within 1 or 2 hours.

Waiting times are 8-10 hours (going up to days), and this is the normal situation now; it's not a New Year's Eve or even Friday evening thing anymore. You have the "right" to less waiting time, which can only mean the government (the worst hospitals are public ones) should be forced to fix this, spending whatever it takes to fix it. And it can be fixed, though at this point you'd have to give physicians and nurses a 50% raise, double the number employed, and 10x the number in training.

Government is just outright not doing this, and if one thing's guaranteed, it's that this will keep getting worse—a direct violation of your rights in most states—for the next 10 years minimum, but probably longer.

laratied | 2 years ago

If someone doesn't agree with this: regulate what, exactly?

Does scikit-learn count, or are we just not going to bother defining what we mean by "AI"?

"AI" is whatever congress says it is? That is an absolutely terrible idea.

wnevets | 2 years ago

> nope, congress does not need to regulate AI.

Not regulating the quality of the air we breathe for decades turned out amazing for millions of Americans. Yes, let's do the same with AI! What could possibly go wrong?

pizza | 2 years ago

I think this is a great argument in the opposite direction: atoms are matter, information isn’t. A small group of people subjected many others to poisonous matter. That matter affected their bodies, and a causal link could be made.

Even if you really believe that somewhere in the chain of consequences derived from LLMs there could be grave and material damage or other affronts to human dignity, there is almost always a more direct causal link that acts as the thing which makes that damage kinetic and physical. And that’s the proper locus for regulation. Otherwise this is all just a bit reminiscent of banning numbers and research into numbers.

Want to protect people’s employment? Just do that! Enshrine it in law. Want to improve the safety of critical infrastructure and make sure it’s reliable? Again, just do that! Want to prevent mass surveillance? Do that! Want to protect against a lack of oversight in complex systems enabling subterfuge by bad actors? Well, make regulation about proper standards of oversight and human accountability. AI doesn’t obviate human responsibility, and a lack of responsibility on the part of humans who should’ve been responsible, and who instead cut corners, doesn’t mean the blame falls on the tool that cut the corners, but rather on the corner-cutters themselves.