elashri | 9 days ago

When a corporation does something good, a lot of executives and people inside will go and claim credit and will demand/take bonuses.

If something bad happens that breaks any law, even if someone gets killed, we don't see them in jail.

I'm not defending either position; I'm just saying that this is not far from how the current legal framework works.

eru | 9 days ago

> If something bad happened against any laws, even if someone got killed, we don't see them in jail.

We do! In many jurisdictions, there are lots of laws that pierce the corporate veil.

cj | 9 days ago

It's surprisingly easy to get away with murder (literally and figuratively) without piercing the corporate veil if you understand the rules of the game. Running decisions through a good law firm also “helps” a lot.

https://en.wikipedia.org/wiki/Piercing_the_corporate_veil

kingstnap | 9 days ago

Well, the important concept missing there, the one that makes everything sort of make sense, is due diligence.

If your company screws up and it comes out that you didn't do your due diligence, then the liability does pass through.

We just need to figure out a due-diligence framework for running bots that makes sense. But right now that's hard to do, because agentic bots that don't completely suck are just a few months old.

gostsamo | 9 days ago

No, it is not hard. You are 100% responsible for the actions of your AI. Rather simple, I'd say.

hvb2 | 9 days ago

> If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.

In theory, sure. Do you know of many examples? I think, worst case, someone being fired is the more likely outcome.

jacquesm | 9 days ago

It's easy: your bot, your liability.

jacquesm | 9 days ago

Hence:

> It's externalization on the personal level

Instead of the corporate level.