top | item 42576258

mrieck | 1 year ago

You didn't list the most important reason:

- Assume LLMs will be more intelligent and cheaper, and that the cost of switching to a new model is non-existent. How does improving the custom/heuristic approach compare in that future?

extr | 1 year ago

That's kind of what I was getting at in point 2, about "new use cases" opening up, but yeah, you stated it more directly. It's hard to argue with. With a heuristic-driven approach we know we'll need expertise, dev hours, etc. to improve the feature. With LLMs, well, some lab out there is basically doing all the hard work for us; all we need to do is sit back, wait a year or two, and then change one line of code, model="gpt-4o" to model="gpt-5o" or whatever.
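To make the "one line of code" point concrete, here's a minimal sketch (hypothetical function and names, not any particular vendor's API) of keeping the model identifier in a single place so an upgrade really is a one-line change:

```python
# The model name lives in exactly one spot; nothing else in the
# pipeline hard-codes it.
MODEL = "gpt-4o"  # the hypothetical upgrade: change this to "gpt-5o"

def build_request(prompt: str, model: str = MODEL) -> dict:
    """Assemble a chat-style request payload (illustrative shape only;
    a real client library would send this over HTTP)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize this thread.")
```

The design choice is just dependency injection for the model name: every call site defaults to the shared constant, so swapping models never touches feature code.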