halayli | 20 days ago

Maybe I missed it, but I don't see them defining what they mean by ethics. Ethics/morals are subjective and change dynamically over time. Companies have no business trying to define what is and isn't ethical, due to conflict of interest. The elephant in the room is not being addressed here.

spacebanana7 | 20 days ago

Especially as most AI safety concerns are essentially political, and uncensored LLMs exist anyway for people who want to do crazy stuff, like having a go at building their own nuclear submarine or rewriting their git history with emoji-only commit messages.

For corporate safety it makes sense that models resist saying silly things, but it's okay for that to be a superficial layer that power users can prompt their way around.

gmerc | 20 days ago

Ah, the classic Silicon Valley "as long as someone could disagree, don't bother us with regulation, it's hard".

sciencejerk | 20 days ago

Often abbreviated to simply "Regulation is hard" or "Security is hard."

voidhorse | 20 days ago

Your water supply definitely wants ethical companies.

nradov | 20 days ago

Ethics are all well and good but I would prefer to have quantified limits for water quality with strict enforcement and heavy penalties for violations.

alex43578 | 20 days ago

Is it ethical for a water company to shut off water to a poor immigrant family over non-payment? Depending on the AI's political and DEI bent, you're going to get totally different answers. Having people judge an AI's response is also going to be influenced by the evaluator's personal bias.

afavour | 20 days ago

I understand the point you’re making but I think there’s a real danger of that logic enabling the shrugging of shoulders in the face of immoral behavior.

It’s notable that, no matter exactly where you draw the line on morality, different AI agents perform very differently.