(no title)
jerry80 | 2 years ago
That is to say, it not only makes it hard to pin responsibility, but actually makes it no longer a crime at all.
AbrahamParangi | 2 years ago
Instead, I think it would be better for organizations to be approximately as liable for their mistakes as for their crimes. In that case it doesn't matter whether an employee does something illegal or an AI does something illegal on behalf of the company; the company remains liable either way.
eropple | 2 years ago
This doesn't really match how intent is handled in (at least American) law; there are reasonable-person tests. That system is subpar, and so I agree with your second paragraph, but it isn't as cut-and-dried as the first paragraph suggests.
Kranar | 2 years ago
For example, if you intentionally take someone's property, say you took their phone away because you genuinely thought the phone was causing them harm and you wanted to help them, you still have the mens rea for theft.
However, if you took someone's phone unintentionally, say you mistook someone else's phone for your own, then you don't have the mens rea for theft.
simonh | 2 years ago
I wrote the above before reading the article, wondering how much the author knows about corporate criminal and civil responsibility. I've worked for financial institutions, so I've been through the training on this. The author turns out to be a graphic designer. Right. I completely understand and appreciate the problems with generative AI, and those points are well taken.
I mean, I've got nothing against graphic designers, and I'm not saying there are no risks with AI. There are many. But the risk assessment, particularly in finance and likely in other business areas, is based on a fundamental misunderstanding of how the regulations in this area already work today.