top | item 46724162


eightysixfour | 1 month ago

> Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

swiftcoder|1 month ago

> shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

eightysixfour|1 month ago

I was closer to upper-middle management and executives; it could have done the things I did (I was a consultant to those people) and the things they did.

It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.

pixl97|1 month ago

As someone who does support I think the end result looks a lot different.

AI works quite well for a lot of support questions and does solve lots of problems in almost every field that needs support. The issue is that this commonly removes the roadblocks that kept your users cautious, so they do something incredibly stupid that then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.

AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.

The company I work at did an experiment: look at past tickets in a quarterly range and predict which issues would generate the most tickets in the next quarter and which issues should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and called out a number of things we had deemed less important that had large impacts in the future.
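One way to sanity-check an experiment like this is to compare any model's predictions against a naive "last quarter repeats" baseline, which gives a floor the AI has to beat. A minimal sketch of that backtest (the quarter labels, ticket categories, and data below are all hypothetical):

```python
from collections import Counter

def top_categories(tickets, quarter, n=3):
    """Count tickets per category in one quarter; return the n largest."""
    counts = Counter(cat for q, cat in tickets if q == quarter)
    return {cat for cat, _ in counts.most_common(n)}

def baseline_overlap(tickets, train_q, test_q, n=3):
    """Fraction of the actual top-n categories in test_q that the
    naive 'last quarter repeats' baseline already predicted."""
    predicted = top_categories(tickets, train_q, n)
    actual = top_categories(tickets, test_q, n)
    return len(predicted & actual) / max(len(actual), 1)

# Hypothetical (quarter, category) ticket log.
tickets = [
    ("Q1", "login"), ("Q1", "login"), ("Q1", "login"),
    ("Q1", "billing"), ("Q1", "billing"), ("Q1", "export"),
    ("Q2", "billing"), ("Q2", "billing"), ("Q2", "billing"),
    ("Q2", "sync"), ("Q2", "sync"), ("Q2", "login"),
]

# The baseline predicted {login, billing}; Q2's actual top two were
# {billing, sync}, so the naive overlap is 0.5.
print(baseline_overlap(tickets, "Q1", "Q2", n=2))  # → 0.5
```

An AI prediction pipeline would only be worth deploying if its overlap with the next quarter's actual top issues consistently exceeds this trivial baseline on held-out quarters.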

0xferruccio|1 month ago

To be fair, at least half of the software engineers I know are facing some level of existential crisis when seeing how well Claude Code works, and what it means for their job in the long term.

And these people are not junior developers working on trivial apps.

pinkmuffinere|1 month ago

Perhaps even more so given the following tagline, "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle manager or executive though.

Terr_|1 month ago

> bullish [...] but not my specialty

IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.

__________

1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."

2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."

3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"

4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."

nostrebored|1 month ago

We're working on this problem at large enterprises, handling complex calls (20+ minutes). I think the only reason we have any success is because the majority of the engineering team has been a customer support rep before.

Every company we talk to has been told "if you just connect openai to a knowledgebase, you can solve 80% of calls." Which is ridiculous.

The amount of work that goes into getting any sort of automation live is huge. We often burn a billion tokens before ever taking a call for a customer. And as far as we can tell, there are no real frameworks tackling the problem in a reasonable way, so everything needs to be built in house.

Then, people treat customer support like every interaction is open-and-shut, ignoring the rest of the company that operates around the support calls and actually fulfills expectations. Seeing other CX AI launches makes me wonder if these companies are even talking to contact center leaders.

danielbln|1 month ago

There are some solid usecases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders that are on board, not just pushed down the gullet by top brass.

eightysixfour|1 month ago

Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.

hn_acc1|1 month ago

>Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?