I'll note that they had a large annotated data set already that they were using to train and evaluate their own models. Once they decided to start testing LLMs it was straightforward for them to say "LLM 1 outperforms LLM 2" or "Prompt 3 outperforms Prompt 4".
I'm afraid that people will draw the wrong conclusion from "We didn’t just replace a model. We replaced a process." and see it as an endorsement of the zero-shot-uber-alles "Prompt and Pray" approach that is dominant in the industry right now and the reason why an overwhelming fraction of AI projects fail.
If you can get good enough performance out of zero shot then yeah, zero shot is fine. Thing is that to know it is good enough you still have to collect and annotate more data than most people and organizations want to do.
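A minimal sketch of what "knowing it is good enough" entails, with purely illustrative labels: even for zero-shot, you still need a human-annotated holdout to score against.

```python
# Minimal sketch of "knowing zero-shot is good enough": you still need a
# human-annotated holdout to score against. All labels below are illustrative.

def precision_recall(gold_labels, predicted_labels):
    """Precision and recall for the positive class (label 1)."""
    tp = sum(1 for g, p in zip(gold_labels, predicted_labels) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold_labels, predicted_labels) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold_labels, predicted_labels) if g == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# The expensive part nobody wants to pay for: human gold labels.
gold = [1, 0, 1, 1, 0, 0, 1, 0]
# Zero-shot LLM outputs on the same holdout claims:
pred = [1, 0, 1, 0, 0, 1, 1, 0]

p, r = precision_recall(gold, pred)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

The point being: the comparison itself is trivial; producing the `gold` list at useful scale is the work.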
> Thing is that to know it is good enough you still have to collect and annotate more data than most people and organizations want to do.
This has been the bottleneck in every ML (not just text/LLM) project I’ve been part of.
Not finding the right AI engineers. Not getting the MLops textbook perfect using the latest trends.
It’s the collecting enough high quality data and getting it properly annotated and verified. Then doing proper evals with humans in the loop to get it right.
People who only know these projects through headlines and podcasts really don’t like to accept this idea. Everyone wants synthetic data with LLMs doing the annotations and evals because they’ve been sold this idea that the AI will do everything for you, you just need to use it right. Then layer on top of that the idea that the LLMs can also write the code for you and it’s a mess when you have to deal with people who only gain their AI knowledge through headlines, LinkedIn posts, and podcasts.
I would offer a stronger, more pointed observation: often the problem in building a good classifier is having good negative examples. More generally, how well a classifier identifies good negatives is a function of:
1. Data collection technique.
2. Data annotation(labelling).
3. Whether the classifier can actually learn from your "good" negatives, quantitatively a matter of the residual/margin/contrastive/triplet losses: i.e., it learns the difference between a negative and a positive at train time, and that separation holds up at test time rather than only at the training optimum.
4. Calibration/Reranking and other Post Processing.
My guess is that they hit a sweet spot with the first 3 techniques.
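The loss point in item 3 can be sketched with a toy triplet margin, where the vectors and margin value are all illustrative: a "good" (hard) negative produces non-zero loss and therefore gradient, while a trivially easy negative teaches the model nothing.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: max(0, d(a,p) - d(a,n) + margin).
    Zero loss means the negative is already separated by at least `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Illustrative embeddings: a "good" negative is close enough to be informative.
anchor   = [0.0, 0.0]
positive = [0.1, 0.0]   # same class, nearby
hard_neg = [0.5, 0.0]   # different class, but close: still contributes loss
easy_neg = [5.0, 0.0]   # trivially far: contributes nothing to learning

print(triplet_loss(anchor, positive, hard_neg))  # positive loss: informative
print(triplet_loss(anchor, positive, easy_neg))  # 0.0: wasted example
```

This is why data collection (item 1) and labelling (item 2) dominate: they determine whether any hard negatives exist in the training set at all.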
- text classification, not text generation
- operating on existing unstructured input
- existing solution was extremely limited (string matching)
- comparing LLM to similar but older methods of using neural networks to match
- seemingly no negative consequences of misclassification for warranty customers themselves (the data is used to improve the process, not to make decisions)
Which is good because a lot of such matching and ML use cases for products I’ve worked on at several companies fit into this. The problem I’ve seen is when decision making capabilities are inferred from/conflated with text classification and sentiment analysis.
In my current role this seems like a very interesting approach to keep up with pop culture references and internet speak that can change as quickly as it takes the small ML team I work with to train or re-train a model. The limit is not a tech limitation, it’s a person-hours and data labeling problem like this one.
Given I have some people on my team that like to explore this area I’m going to see if I can run a similar case study to this one to see if it’s actually a fit.
Edit: At the risk of being self deprecating and reductive: I’d say a lot of products I’ve worked on are profitable/meaningful versions of Silicon Valley’s Hot Dog/Not Hot Dog.
I agree with you that the headline really needs to be qualified with these details. So there's an aspect of being unsurprising here, because that particular set of details is exactly where LLMs perform very well.
But I think it's still an interesting result, because related and similar tasks are everywhere in our modern world, and they tend to have high importance in both business and the public sector. The older generation of machine learning techniques for handling these tasks was sophisticated, but demanding enough that very capable and experienced practitioners might need an R&D cycle just to conclude whether the problem was solvable with the available data to the desired standard.
LLMs represent a tremendous advancement in our ability as a society to deal with these kinds of tasks. So yes, it's a limited range of specific tasks, and success is found within a limited set of criteria, but they're very important tasks and enough of those criteria are met in practice that I think this result is interesting and generalizable.
That doesn't mean we should fire all of our data scientists and let junior programmers just have at it with the LLM, because you still need to put together a good dataset, make sense of the results, and iterate intelligently, especially given that these models tend to be expensive to run. It does however mean that existing data teams must be open to adopting LLMs instead of traditional model fitting.
It didn't stick out to me because "corporate success story" articles already tend to sound like that, which is at least in part where I imagine the popular LLMs get it from. (The other part being pop nonfiction books.)
> That line sticks out so much now, and I can't unsee it.
I thought maybe they did it on purpose at first, like a cheeky but too subtle joke about LLM usage, but when it happened twice near the end of the post I just acknowledged, yeah, they did the thing. At least it was at the end or I might have stopped reading way earlier.
I dunno, ending with a short, punchy insight is a common way to make an impactful conclusion. It's the equivalent of a "hook" for concluding an article instead of opening. I do it often and see others (e.g. OpEds) use that tactic all the time.
I think we're getting into reverse slop discrimination territory now. LLMs have been trained on so much of what we consider "good writing", that actual good writing is now attributed by default to LLMs.
Dunno, reads fine to me. Also seems we have a #nothingisreal problem now where everything is AI. Given that LLMs were trained on pre-existing writing, it follows that people commonly write like that.
overall I think things have gotten better. I noticed, maybe 3 years before ChatGPT hit the scene, that pages I'd frequent often definitely didn't seem written by a native English speaker. The writing was just weird. I see less of that former style now.
Probably the biggest new trend I notice is this very prominent "Conclusion" block that seems to show up now.
Honestly I'd love to see some data on it. I suspect a lot of what gets called "LLM slop" isn't, a lot of actual LLM output goes unnoticed, and plenty of LLM tropes were rife in online content long before LLMs; we're just hypersensitive now to certain things because LLMs overuse them.
Warranty data is a great example of where LLMs can cut through bureaucratic data overhead. What most people do not know is that, because of the US federal TREAD regulation, automotive companies (if they want to collect and look at warranty data) need to review all warranty claims, document and detect any safety-related issues, and issue recalls, all with a strong auditability requirement. This generates huge data and operations overhead: companies need to either hire tens if not hundreds of individuals to inspect claims or come up with automation to make the process easier.
Over the past couple of years people have made attempts with NLP (let's say standard ML workflows), but NLP and word temperature scores are hard to integrate into a reliable data pipeline, much less an operational review workflow.
Enter LLMs: the world is a data guru's oyster for building a detection system on warranty claims. Passing data to prompted LLMs means capturing and classifying records becomes significantly easier, and these data applications can flow into more normal analytic work streams.
People generally sleep when you start talking about fine-tuned BERT and CLIP, although they do a fairly decent job as long as you have good data and know what you're doing.
But no, they want to pay $0.10 per request to recognize whether a photo has a person in it by asking a multimodal LLM deployed across 8x GPUs, for some reason, instead of just spending a few hours with CLIP and running it effectively even on a CPU.
I love using encoder models, and they are generally a better technology for this kind of application. But the price of GPU instances is too damn high.
I won’t lie that I’ve been unreasonably annoyed that I have to use a lot more compute than I need, for no other reason than an LLM API exists and it’s good enough in a relatively small throughput application.
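Back-of-envelope arithmetic for the comparison above, using the $0.10/request figure mentioned earlier; the request volume and the CPU instance price are made-up assumptions for illustration.

```python
# Back-of-envelope cost comparison. The $0.10/request figure comes from the
# comment above; the daily volume and CPU instance rate are assumptions.

requests_per_day = 100_000

llm_cost_per_request = 0.10            # per-request multimodal LLM API price
llm_daily = requests_per_day * llm_cost_per_request

# A fine-tuned CLIP-style encoder on a CPU box: flat instance cost,
# independent of request count (up to its throughput ceiling).
cpu_instance_per_day = 25.0            # hypothetical CPU instance rate
clip_daily = cpu_instance_per_day

print(f"LLM API:  ${llm_daily:,.0f}/day")
print(f"CPU CLIP: ${clip_daily:,.0f}/day")
```

At these assumed numbers the per-request API is orders of magnitude more expensive; the crossover obviously depends on volume, which is exactly why low-throughput applications can get away with the LLM.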
> Over multiple years, we built a supervised pipeline that worked. In 6 rounds of prompting, we matched it. That’s the headline, but it’s not the point. The real shift is that classification is no longer gated by data availability, annotation cycles, or pipeline engineering.
It's worth highlighting the conditions under which this can help:
> in domains where the taxonomy drifts, the data is scarce, or the requirements shift faster than you can annotate
It's not actually clear if warranty claims really meet these criteria.
For warranty claims, the difficulty is in detecting false negatives, when companies have a strong incentive and opportunity to hide the negatives.
Companies have been trusted to do this kind of market surveillance (auto warranties, drug post-market reporting) largely based on faith that the people involved would do so in earnest. That faith is misplaced when the process is automated (not because the implementors are less diligent, but because they are too removed to tell).
Then the backlash to a few significant injuries might be a much worse regime of bureaucratic oversight, right when companies have replaced knowledge with automation (and replacement labor costs are high).
I thought the same. Having said that, the parentheses in the example are really wrong for what they were trying to convey. I suspect that they built this SQL sample for the document and made some mistakes in its generation.
Perhaps I could say, it isn't just generated--it is also hallucinated!
And this is where the strengths of LLMs really lie: making performant ML available to a wider audience, without requiring PhDs in computer science or mathematics to build. It’s consistently where I spend my time tinkering with these, albeit in a local-only environment.
If all the bullshit hype and marketing would evaporate already (“LLMs will replace all jobs!”), stuff like this would float to the top more and companies with large data sets would almost certainly be clamoring for drop-in analysis solutions based on prompt construction. They’d likely be far happier with the results, too, instead of fielding complaints from workers about it (AI) being rammed down their throats at every turn.
Honda probably spends $100-$10,000 on a warranty claim in terms of technician time and parts. [1] Even at the low end they can afford to spend 10 cents on an LLM to analyze a claim.
Running big LLMs is expensive, but not nearly expensive as hiring people. Employees are very expensive, well beyond their wages that you see. Everything from employment taxes (employer paid) to hiring additional people to manage the people and their HR needs.
Even if it took $10 to run everything to handle each request, that’s far cheaper than even a minimum wage employee when you consider all of the employment overhead.
It could have been done via topic analysis without an LLM.
In fact there are companies such as Medallia which specialize in CX and have really strong classification solutions for specifically these use cases (plus all the generative AI stuff for closing the loop).
The topic modeling of every major vendor, mostly awful LDA implementations, is horrendous: on the order of ±20 absolute percentage points of error per topic. It would make my life easier if it weren’t so shit. As is, at every customer we have to go in and do legitimate topic modeling and taxonomization.
Their AI implementations are also awful. Just sample 100 contacts with someone who actually understands the business and see their reaction.
“cut-chip” usually describes a fault where the engine cuts out briefly—as if someone flicked the ignition off for a split second—and the driver hears or feels a little “chip” or sharp interruption in power.
So you're still ignoring that problem of putting the oil filter in places that cause excessive spilled oil? The point of artificial intelligence is to quickly have an unbiased check of all signal within the noise.
intuitively it has seemed that these kinds of "fuzzy text search" applications are an area where llms really shine. it's cool to see evidence of it working.
i'm curious about some kind of notion of "prompt overfitting." it's good to see the plots of improvement as the prompts change (although error bars probably would make sense here), but there's not much mention of hold out sets or other approaches to mitigate those concerns.
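One way to mitigate the prompt-overfitting concern, sketched with assumed names and illustrative data: freeze a holdout split that prompt iteration never touches, and score it exactly once at the end.

```python
import random

def split_holdout(labeled_claims, holdout_frac=0.3, seed=42):
    """Shuffle once with a fixed seed and freeze a holdout set that
    prompt iteration never sees. Names and sizes here are illustrative."""
    rng = random.Random(seed)
    items = list(labeled_claims)
    rng.shuffle(items)
    n_holdout = max(1, round(len(items) * holdout_frac))
    return items[:-n_holdout], items[-n_holdout:]

# Illustrative labeled data: (claim_text, gold_label)
claims = [(f"claim {i}", i % 2) for i in range(10)]
dev, holdout = split_holdout(claims)

# Iterate prompts 1..6 against `dev` only; report `holdout` once at the end.
print(len(dev), len(holdout))  # 7 3
```

This is the prompt-engineering analogue of a train/test split: six rounds of prompt tuning against the same evaluation set is six chances to overfit it.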
It's Amazon's own model. I'm baffled someone would pick it, and even more that someone would test Llama 4 for a task in an age where Sonnet 4.5 is already out, i.e., within the last 45 days.
Looks like they were limited by AWS Bedrock options.
And yet, the source problem still remains. The company has a shitty way of reporting quality issues in relation to parts and assemblies.
Having worked at an automaker, I can almost smell the silos where data resides, the rigidly defended lines between manufacturing, sales, and post-sales, the intra-departmental political fights.
Then you have all the legacy of enterprise software.
And the result is this shitty warranty claims data.
As someone who also worked at a large automaker, I think you’re making large, unfounded assumptions.
Warranty data flows up from the technicians - good luck getting any auto technician to properly tag data. Their job is to fix a specific customer’s problem, not identify systematic issues.
There’s a million things that make the data inherently messy. For example, a technician might replace 5 parts before they finally identify the root cause.
Therefore, you need some sort of department to sit between millions of raw claims and engineering. I would be curious what kind of alternative you have in mind?
> We tried multiple vectorization and classification approaches. Our data was heavily imbalanced and skewed towards negative cases. We found that TF-IDF with 1-gram features paired with XGBoost consistently emerged as the winner.
Yeah, I’m curious if they tried training a BERT or similar classifier… intuitively that seems better than TF-IDF, which throws away a ton of information.
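For reference, the 1-gram TF-IDF representation quoted above can be sketched in a few lines. This is a toy pure-Python rendition, not the authors' actual implementation (they presumably used a library); XGBoost would then consume these vectors.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy 1-gram TF-IDF: term frequency scaled by inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    vocab = sorted({tok for toks in tokenized for tok in toks})
    n_docs = len(docs)
    # df[t] = number of documents containing term t
    df = Counter(tok for toks in tokenized for tok in set(toks))
    idf = {t: math.log(n_docs / df[t]) for t in vocab}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append([tf[t] / len(toks) * idf[t] for t in vocab])
    return vocab, vectors

# Illustrative claim texts:
docs = [
    "engine oil leak at filter",
    "engine cuts out briefly",
    "oil filter housing leak",
]
vocab, X = tfidf_vectors(docs)
# A term appearing in every document would get idf = log(n/n) = 0,
# i.e. carry no signal -- which is the "throwing away information" point:
# word order and context are gone entirely, only weighted counts remain.
```

A BERT-style encoder, by contrast, keeps word order and context, which is the commenter's intuition about why it should beat this representation.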
scrame|3 months ago
An overwhelming amount of software projects fail, AI just helps them get there faster.
jwong_|3 months ago
> We didn’t just replace a model. We replaced a process.
That line sticks out so much now, and I can't unsee it.
prasoonds|3 months ago
> That’s not a marginal improvement; it’s a different way of building classifiers.
They've replaced an em-dash with a semi-colon.
Aniket-N|3 months ago
It’s not X it’s Y. We didn’t just do A we did B.
There’s definitely a lot of hard work that has gone in here. It’s gotten hard to read because of these sentence patterns popping up everywhere.
StefanBatory|3 months ago
At the same time, as a nonnative speaker of English, this is literally how we were taught to write eye-catching articles and phrases. :P
A lot of formulaic writing is what we were taught to do, especially with more formal things. (This is more of a sidenote to this example)
So in a hunt for LLMs, we also get hit.
kazinator|3 months ago
(Even ironically sometimes observed in cases when the writing is disparaging of AI and the use of AI).
If the subject matter is AI, you should instantly pay attention and look for the signs it was AI assisted or generated outright.
chanux|3 months ago
... Wait a minute!
Der_Einzige|3 months ago
https://arxiv.org/abs/2510.15061
stogot|3 months ago
“ Fun fact: Translating French and Spanish claims into German first improved technical accuracy—an unexpected perk of Germany’s automotive dominance.”
lfx|3 months ago
Does it make the text clearer? How exactly? Is the German language more descriptive? Does it somehow expand the context?
So many questions in this fun fact.
happimess|3 months ago
Given that it was inside a 9-step text preprocessing pipeline, it would be surprising if the AI had that much autonomy.
mcdonje|3 months ago
The text says, "...no leaks..." The case statement says, "...AND LOWER(claim_text) NOT LIKE '%no leak%...'"
It would've properly been marked as a "0".
djoldman|3 months ago
* "2 years vs 1 month" is a bit misleading because the work that enabled testing the 1 month of prompting was part of the 2 years of ML work.
* XGBoost is an ensemble method... add the LLM outputs as inputs to XGBoost and probably enjoy better results.
* Vectorize all the text data points using an embedding model and add those as inputs to XGBoost, probably for better results.
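The feature-stacking idea in those last two bullets, sketched with hypothetical function and variable names (none of this is from the article):

```python
def augment_features(tfidf_row, llm_label, embedding):
    """Concatenate base TF-IDF features with an LLM verdict and an embedding
    vector. All inputs are hypothetical stand-ins for the pipeline's features."""
    return list(tfidf_row) + [float(llm_label)] + list(embedding)

# Illustrative row: 3 TF-IDF features, one LLM 0/1 verdict, a 2-d embedding.
row = augment_features([0.2, 0.0, 0.7], 1, [0.11, -0.42])
print(row)  # [0.2, 0.0, 0.7, 1.0, 0.11, -0.42]
# `row` would then be one training instance for the XGBoost ensemble,
# letting the boosted trees learn when to trust the LLM's output.
```

The appeal is that XGBoost can learn to override the LLM where it is systematically wrong, rather than picking one approach over the other.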
PaulHoule|3 months ago
[1] specifically https://www.warrantyweek.com/archive/ww20230817.html claims the expectation value of warranty claims for a car is around $650.
greazy|3 months ago
> Translating French and Spanish claims into German first improved technical accuracy—an unexpected perk of Germany’s automotive dominance.
It brings up an interesting idea that some languages are better suited for different domains.
suriya-ganesh|3 months ago
LLMs still beat a classifier, because they're able to extract more signal than a text embedding.
It's very difficult to beat an LLM + prompt in terms of semantic extraction.