top | item 39348854

nvrmnd | 2 years ago

This and other proposed legislation are attempting to hit the ball out of the park on the first pitch. I feel it would be a lot more sensible and effective to legislate against clear and present harms, such as holding the firms that develop deep-fake technology liable when it is used for identity theft for the purpose of fraud.

ApolloFortyNine | 2 years ago

Should I be able to sue Honda because someone in their Civic ran into me?

A user's misuse of a technology shouldn't be the responsibility of the developer. Otherwise you could apply that logic to almost every product in the world.

mike_d | 2 years ago

Yes, if the Civic had a feature that made it easier to hit you, or lacked a reasonable feature that would have prevented it from hitting you.

We have a long history of legally going after companies whose products are aimed at criminal activity, explicit or implied.

wolverine876 | 2 years ago

If their Civic's brakes were poorly designed or implemented, then yes, Honda should be liable. Then we get into the definition of 'poorly' - in what distance and time should the car stop? - and then we need some sophisticated regulation.

bluGill | 2 years ago

If Honda says that the Civic is good for running into people, then yes - that is its stated purpose. Or if Honda says you don't have to worry because it is not possible to hit someone - then they promised they had taken care of it.

Note that courts weigh advertising over warning labels and the manual. That is why many car ads carry the on-screen text "professional driver on closed track" - it makes clear that they think the car can do it, but that most customers can't. Likewise, ads for cutting tools often show "guards removed for clarity" on tools that are clearly not operating (or on a cartoon image rather than the real tool) - if they advertise someone actually running the tool without the guard, they are liable.

There is also the concept of foreseeable misuse in the courts. If you can imagine someone doing something with your product, you have to show the court that it isn't the intended purpose and that you tried to prevent it. If someone does something you didn't think of, you need to show the court that you put reasonable effort into figuring out all the possible misuses - otherwise it becomes a lack of creativity on your part. Thinking of a misuse doesn't mean you have to make it impossible; you just have to make a reasonable effort to ensure it doesn't happen. Guards, warning labels, training, and refusing to sell to some customers are all common tactics for selling something that can be misused without being liable - but even then, you can't just put a warning label on a danger you could have guarded against.

The above just scratches the surface of what the courts deal with (and different countries have different laws). If you need details, talk to a lawyer.

screye | 2 years ago

I'm suspicious of this bill, but your analogy does more to show how horrifyingly unregulated cars are than to argue for individual responsibility.

A car allows you to break the law by going twice as fast as the highest speed limit in the nation. A faster car with higher ground clearance does make it easier to fatally run into someone. The Tesla Cybertruck is a killing machine in car form.

Cars are a leading cause of death in the US. Maybe we need a similar 'pre-emptive manufacturer-side intervention' bill for cars too.

thinkingtoilet | 2 years ago

If it were found that they were reckless, absolutely. I believe this is already the case.

nvrmnd | 2 years ago

If there was legislation that required Honda to install certain safety features and they failed to do so, then yes they should be liable.

mullingitover | 2 years ago

> I feel it would be a lot more sensible and effective to legislate clear and present harms, such as holding developing firms liable for deep-fake technology if used for identity theft for the purpose of fraud.

s/deep-fake/photoshop

Deepfakes are simply a more convenient form of the photo/video/audio editing that has been around for decades[1], and we don't really need new legislation to deal with them. The laws against fraud, defamation, etc. - the actually harmful things that can be accomplished with deepfakes - don't need any updates to handle the technology. If we're going to hobble new technologies, we may as well go back and hold Adobe responsible for all the shady things people have done with Photoshop, and the makers of video/audio editing suites for all the deceptive clips people have spliced together.

[1] https://www.youtube.com/watch?v=La5jrfobfTM&t=1s

gary_0 | 2 years ago

s/photoshop/airbrush

I vaguely recall seeing some fairly convincing B&W Soviet-era photos (I think they had Stalin in them) in which people had been removed and others moved around to fill the gap. And document forgery for the purposes of fraud and espionage has, of course, been around for centuries.

But I think the issue is less the capability itself, and more that companies will make it too easy (trivial, actually) for anyone to commit mischief. The ability to mass-manipulate images on command is no longer restricted to the General Secretary of the USSR.

That doesn't necessarily mean regulation is required, though--plenty of modern technologies make it very easy to commit crimes, but only some of them require special rules.

ajmurmann | 2 years ago

I understood the bill to explicitly not target misuse of the AI (from the article: "Odd that ‘a model autonomously engaging in a sustained sequence of unsafe behavior’ only counts as an ‘AI safety incident’ if it is not ‘at the request of a user.’ If a user requests that, aren’t you supposed to ensure the model doesn’t do it? Sounds to me like a safety incident."). This seems to be entirely targeted at potential risk from a rogue AI. What regulation would you propose to address that risk?