doorhammer | 6 months ago
Two things _would_ surprise me, though:
- That they'd integrate it into any meaningful process without having done actual analysis of the LLM based perf vs their existing tech
- That they'd integrate the LLM into a core process their department is judged on knowing it was substantially worse when they could find a less impactful place to sneak it in
I'm not saying those are impossible realities. I've certainly known call center senior management to make more harebrained decisions than that, but barring more insight I personally default to assuming OP isn't among the harebrained.
shortrounddev2 | 6 months ago
Instead of doing any of those (we have the infrastructure for it), we're paying OpenAI for their embeddings API. Perhaps OpenAI is just doing old-school ML under the hood, but there's definitely an instinct among product managers to reach for shiny tools from shiny companies instead of considering more conservative options.
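For a sense of what the "more conservative option" might look like, here's a minimal sketch of an old-school retrieval baseline: TF-IDF vectors plus cosine similarity, in pure stdlib Python. The intent corpus and `best_match` helper are hypothetical, purely for illustration; a real in-house system would presumably use scikit-learn or similar rather than hand-rolled vectors.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for a list of tokenized documents (pure stdlib)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each term once per doc
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}  # smoothed IDF
    vecs = [{t: tf * idf[t] for t, tf in Counter(doc).items()} for doc in docs]
    return vecs, idf

def cosine(a, b):
    """Cosine similarity between two sparse dict-vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy call-center intents (hypothetical data, purely illustrative)
corpus = [
    "reset my account password".split(),
    "cancel my subscription plan".split(),
    "update billing credit card".split(),
]
vecs, idf = tfidf_vectors(corpus)

def best_match(query):
    """Return the index of the corpus document most similar to the query."""
    tf = Counter(query.split())
    qvec = {t: c * idf.get(t, 0.0) for t, c in tf.items()}  # unseen terms weigh 0
    scores = [cosine(qvec, v) for v in vecs]
    return max(range(len(scores)), key=lambda i: scores[i])
```

Whether this beats paid embeddings on a given routing task is exactly the kind of question the comparison numbers would have to answer.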
doorhammer | 6 months ago
I think for me, the way the GP phrased things just made me want to give them the benefit of the doubt.
Given my experience, the people I've worked with, and how the GP phrased things, in my mind it's more likely than not that they're not making a naive "chase-the-AI" decision, and that a lot of the replies came from people without much call center experience.
The department I worked with when I did work in call centers was particularly competent and also pretty org savvy. Decisions were always a mix of pragmatism and optics. I don't think it's hard to find people like that in most companies. I also don't think it's hard to find the opposite.
But yeah, when I say something would be surprising, I don't mean it's impossible. I mean that the GP sounds informed and competent, and if I assume that, it'd be surprising to me if they sacrificed long-term success for an immediate boost by slotting LLMs into something so core to their success metrics.
But, I could be wrong. It's just my hunch, not a quantitative analysis or anything. Feature factory product influence is a real thing, for sure. It's why the _main_ question I ask in interviews is to have everyone describe the relationship between product and eng, so I definitely self-select toward a specific dynamic that probably unduly influences my perspective. I've been places where the balance tilts hard toward product, and it sucks working somewhere like that.
But yeah, for deciding whether more standard ML techniques are worth replacing with LLMs, I'd ultimately need to see actual numbers from someone concretely comparing the two approaches. I just don't have that context.