albert_e | 21 days ago

On the other hand ... I recently had to deal with official Microsoft Support for an Azure service degradation / silent failure.

Their email responses were broadly all like this -- fully drafted by GPT. The only thing I liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included pointed to a service degradation and failure on Microsoft's side. A purely human mind would not have conceded the point so readily without some hedging or dilly-dallying, or keeping some options open to avoid accepting blame.

datsci_est_2015 | 21 days ago

> The only thing i liked about that whole exchange was that GPT was readily willing to concede that all the details and observations I included point to a service degradation and failure on Microsoft side.

Reminds me of an interaction I was forced to have with a chatbot over the phone for “customer service”. It kept apologizing, saying “I’m sorry to hear that.” in response to my issues.

The thing is, it wasn’t sorry to hear that. AI is incapable of feeling “sorry” about anything. It’s anthropomorphizing itself and aping politeness. I might as well have a “Sorry” button on my desk that I smash every time a corporation worth $TRILL wrongs me. Insert South Park “We’re sorry” meme.

Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?

wat10000 | 21 days ago

Better than actual human customer agents who give an obviously scripted “I’m sorry about that” when you explain a problem. At least the computer isn’t being forced to lie to me.

We need a law that forces management to be regularly exposed to their own customer service.

shiandow | 21 days ago

> Are you sure “readily willing to concede” is worth absolutely anything as a user or consumer?

The company can't have it both ways. Either they have to admit the AI "support" is bollocks, or they are culpable for what it concedes. Either way they are in the wrong.
