top | item 47139134

chaboud | 5 days ago

Why stop there? Just call the LLM with the data and function description and get it to return the result!

(I'll admit that I've built a few "applications" exploring interaction descriptions with our Design team that do exactly this - but they were design explorations that, in effect, used the LLM to simulate a back-end. Glorious, but not shippable.)
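A minimal sketch of what "call the LLM with the data and a function description" looks like (the `call_llm` stub below is hypothetical, standing in for any real chat-completion API; here it returns a canned reply so the sketch is self-contained):

```python
import json

def call_llm(prompt: str) -> str:
    """Stub for a real LLM API call. A real version would hit a
    chat-completion endpoint; this one canned-answers the example."""
    return json.dumps({"result": [3, 1]})

def llm_function(description: str, data) -> object:
    """'Execute' a function by describing it to the LLM instead of coding it."""
    prompt = (
        f"You are a function: {description}\n"
        f"Input (JSON): {json.dumps(data)}\n"
        'Respond with ONLY a JSON object of the form {"result": ...}.'
    )
    return json.loads(call_llm(prompt))["result"]

# The "back-end" is just a prompt: no parsing code was written.
odds = llm_function("return the odd numbers from the input list", [3, 4, 1, 8])
```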


ryancoleman | 5 days ago

That's basically how it works! (with human authored functions that validate the result, automatically providing feedback to the LLM if needed)
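That validate-and-feed-back loop can be sketched in a few lines (the `call_llm` stub and the JSON-only validator are assumptions for illustration; the stub's first reply is malformed so the retry path actually runs):

```python
import json

# Scripted replies standing in for a real LLM endpoint (hypothetical stub).
_replies = iter(["not-json", '{"total": 42}'])

def call_llm(prompt: str) -> str:
    return next(_replies)

def validate(reply: str):
    """Human-authored validator: the result must parse as JSON."""
    try:
        json.loads(reply)
        return True, ""
    except ValueError as e:
        return False, str(e)

def run_with_validation(prompt: str, max_attempts: int = 3) -> str:
    feedback = ""
    for _ in range(max_attempts):
        reply = call_llm(prompt + feedback)
        ok, error = validate(reply)
        if ok:
            return reply
        # Automatically feed the validation error back and retry.
        feedback = f"\nYour previous answer was invalid ({error}). Try again."
    raise RuntimeError("LLM output never passed validation")

result = run_with_validation("Sum the order totals; reply with JSON only.")
```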

falcor84 | 5 days ago

Because you often need the result not as a standalone artifact, but as a piece in a rigid process consisting of well-defined business logic and control flow, which you can't yet trust AI with.

mtw14 | 5 days ago

What was the gap you discovered that made it not shippable? This is an experimental project, so I'm curious to know what sorts of problems you ran into when you tried a similar approach.

chaboud | 3 days ago

Three things:

1. Confirmable, predictable behavior (can we test it, can we make assurances to customers?).

2. Comparative performance (an LLM call extracts from a list in hundreds of milliseconds, where equivalent code does it in under 10 ms).

3. Operating costs. LLM calls are spendy. Just think of them as hyper-unoptimized lossy function executors (along with being lossy encyclopedias), and the work starts to approach bogo algorithm levels of execution cost for some small problems.
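A back-of-envelope version of that cost argument (every figure below is an assumption for illustration, not a measurement or a quoted price):

```python
# Assumed figures: a native list-extraction at ~10 microseconds,
# vs. an LLM call at ~0.3 s latency and ~$0.001 per invocation.
native_latency_s = 1e-5
llm_latency_s = 0.3
llm_cost_usd_per_call = 1e-3
calls_per_day = 100_000

# How much slower, and what the "function executor" bill looks like.
slowdown = llm_latency_s / native_latency_s
daily_cost_usd = llm_cost_usd_per_call * calls_per_day
```

Under those assumptions the LLM "function" is tens of thousands of times slower and costs ~$100/day for work that compiled code does essentially for free.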

Buuuuuut.... I had working functional prototype explorations with almost no work on my end, in an hour.

We've now extended this thinking to some experience exploration builders, so it definitely has a place in the toolbox.