chaboud | 5 days ago
(I'll admit that I've built a few "applications" exploring interaction descriptions with our Design team that do exactly this - but they were design explorations that, in effect, used the LLM to simulate a back-end. Glorious, but not shippable.)
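The "LLM simulating a back-end" pattern can be sketched roughly like this. `call_llm` is a hypothetical stand-in for whatever chat-completion client a prototype would use; here it's stubbed with a canned reply so the sketch runs offline:

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real chat-completion client.
    # A design prototype would hit an actual model; this canned
    # reply just keeps the sketch self-contained.
    return '{"id": 42, "name": "demo user", "status": "active"}'

def fake_backend(method: str, path: str, body: dict) -> dict:
    # Ask the model to *act like* the API the design needs --
    # no schema, no database, no handler code written.
    prompt = (
        "You are a JSON REST API. Reply with only a JSON body.\n"
        f"{method} {path}\n{json.dumps(body)}"
    )
    return json.loads(call_llm(prompt))

response = fake_backend("GET", "/users/42", {})
print(response["name"])
```

Great for interaction demos, since the "server" improvises plausible responses; useless to ship, for exactly the reasons in the sibling comment.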
chaboud | 3 days ago
1. Confirmable, predictable behavior (can we test it, can we make assurances to customers?).
2. Comparative performance (an LLM call extracting from a list takes hundreds of milliseconds, versus under 10 ms for plain code).
3. Operating costs. LLM calls are spendy. Just think of them as hyper-unoptimized lossy function executors (along with being lossy encyclopedias), and the work starts to approach bogosort levels of execution cost for some small problems.
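Points 1 and 2 above can be made concrete with a toy extraction task. A deterministic extractor is testable, gives the same output for the same input every time, and finishes in microseconds (this is a generic illustration, not anyone's actual workload):

```python
import re
import time

def extract_emails(text: str) -> list[str]:
    # Deterministic: same input -> same output, so we can assert on it
    # and make assurances to customers.
    return re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text)

text = "reach us at a@example.com or b@example.org"
start = time.perf_counter()
emails = extract_emails(text)
elapsed_ms = (time.perf_counter() - start) * 1000

assert emails == ["a@example.com", "b@example.org"]
print(f"{emails} in {elapsed_ms:.3f} ms")
```

An LLM doing the same extraction gives you no such assertion, hundreds of milliseconds of latency, and a per-call bill.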
Buuuuuut.... I had working functional prototype explorations with almost no work on my end, in an hour.
We've now extended this thinking to some experience exploration builders, so it definitely has a place in the toolbox.