(no title)
powera
|
24 days ago
There's a difference between the chatbot "advertising" something and an hour-long manipulative conversation getting the chatbot to make up a fake discount code. Based on the OP's comments, if it was a human employee who gave the fake code they could plausibly claim duress.
acdha|24 days ago
Replacing the employee with a rental robot doesn’t change that: the business is expected to handle training and recover losses due to not following that training under their rental contract. If the robot can’t be trained and the manufacturer won’t indemnify the user for losses, then it’s simply not fit for purpose.
This is the fundamental problem blocking adoption of LLMs in many areas: they can’t reason and prompt injection is an unsolved problem. Until there are some theoretical breakthroughs, they’re unsafe to put into adversarial contexts where their output isn’t closely reviewed by a human who can be held accountable. Companies might be able to avoid paying damages in court if a chatbot is very clearly labeled as not not to be trusted, but that’s most of the market because companies want to lay off customer service reps. There’s very little demand for purely entertainment chatbots, especially since even there you have reputational risks if someone can get it to make a racist joke or something similarly offensive.
unknown|24 days ago
[deleted]
szszrk|24 days ago
If that "difference" is so obvious to you (and you expect it will break at some point), why don't you demand the company to notice that problem as well? And simply.. not put bogus mechanism in place, at all.
Edit: to be clear. I think company should just cancel and apologize. And then take down that bot, or put better safeguards (good luck with that).
hshdhdhj4444|24 days ago