chaboud | 3 days ago
1. Confirmable, predictable behavior (can we test it, can we make assurances to customers?).
2. Comparative performance (an LLM call extracting from a list takes 100s of ms, where plain code does it in <10 ms).
3. Operating costs. LLM calls are spendy. Think of them as hyper-unoptimized lossy function executors (as well as lossy encyclopedias), and the work starts to approach bogosort levels of execution cost for some small problems.
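A tiny sketch of point 2 above, assuming a made-up extraction task (pulling email addresses out of a list of strings): the plain-code version is deterministic, trivially testable, and finishes in well under a millisecond, where an LLM call for the same job would be a network round-trip per request.

```python
import re
import time

# Plain-code extraction: pull email addresses out of a list of strings.
# Deterministic and testable, unlike a lossy LLM extraction (point 1).
def extract_emails(lines):
    pattern = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    return [m.group(0) for line in lines for m in pattern.finditer(line)]

lines = [
    "Contact alice@example.com for details",
    "no address here",
    "bob@example.org, carol@example.net",
]

start = time.perf_counter()
emails = extract_emails(lines)
elapsed_ms = (time.perf_counter() - start) * 1000

print(emails)  # ['alice@example.com', 'bob@example.org', 'carol@example.net']
print(f"{elapsed_ms:.3f} ms")

# The LLM path for the same extraction would be a network round-trip per
# call (typically 100s of ms) with non-deterministic output -- points 1-3.
```

Obviously the LLM wins when you can't write the extraction rule up front, which is exactly the prototyping case below.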
Buuuuuut... I had working functional prototype explorations, with almost no effort on my end, in an hour.
We've now extended this thinking to some experience exploration builders, so it definitely has a place in the toolbox.