qu0b | 9 months ago

Great summary of the trade-offs in agentic systems. We've tackled these exact challenges while building our conversational product-discovery tool for e-commerce at IsarTech [0].

I agree that function composition and structured data are essential for keeping complexity in check. In our experience, well-defined structured outputs are the real scalability lever in tool calling: typed schemas keep both cognitive load and system complexity manageable. We rely on deterministic behavior wherever possible and reserve LLM processing for cases involving schema-less data or ambiguity. It's a great tool for mapping fuzzy user requests onto a more structured, deterministic system.
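A minimal sketch of that pattern, using only the standard library. The schema and field names (category, max_price, color) are illustrative, not IsarTech's actual API; the point is that the LLM's tool arguments get validated against a typed schema before anything downstream runs deterministically on them:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structured output for a product-search tool call.
@dataclass
class ProductQuery:
    category: str
    max_price: Optional[float] = None
    color: Optional[str] = None

def parse_tool_output(raw: dict) -> ProductQuery:
    """Validate the LLM's JSON tool arguments against the typed schema.
    Deterministic: unknown keys are rejected, types are checked."""
    allowed = {"category", "max_price", "color"}
    unknown = set(raw) - allowed
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    if not isinstance(raw.get("category"), str):
        raise ValueError("category must be a string")
    return ProductQuery(**raw)

query = parse_tool_output({"category": "sneakers", "max_price": 120.0})
```

In practice you'd reach for a validation library (Pydantic, or the provider's JSON-schema mode), but the idea is the same: the fuzzy part ends at the schema boundary.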

That said, the trade-off between taking complexity out of high-entropy input and introducing complexity through chained tool calling is a balance that needs to be struck carefully. In real-world commerce settings, you rarely get away with just one approach. Structured outputs are great until you hit ambiguous intents—then things get messy and you need fallback strategies.
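One common shape for that fallback, sketched under assumed names (the regex rule and the llm_parse stub are placeholders, not a real provider API): try a cheap deterministic parser first, and only hand the high-entropy input to the LLM when the rules come up empty.

```python
import re

def deterministic_parse(text: str):
    """Cheap rule-based intent parser; returns None when the input is ambiguous."""
    m = re.match(r"show me (\w+) under \$(\d+)", text.lower())
    if m:
        return {"intent": "search",
                "category": m.group(1),
                "max_price": int(m.group(2))}
    return None  # rules don't cover it -> needs the LLM

def llm_parse(text: str):
    """Placeholder for an LLM call (hypothetical; swap in your provider's API).
    Here it just asks for clarification."""
    return {"intent": "clarify", "question": "Which product did you mean?"}

def route(text: str):
    """Fallback strategy: deterministic first, LLM only for ambiguous input."""
    return deterministic_parse(text) or llm_parse(text)
```

The ordering matters: the deterministic path is free and testable, so the LLM only pays for the queries that actually need it.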

[0] https://isartech.io/

jacob019 | 9 months ago

Ambiguity must be handled explicitly, like uncertainty in predictive modeling, and that can be challenging. I run into trouble with task complexity: at a certain point even the best models start making dumb mistakes, and it's tough to decide where to draw the line when decomposing tasks. Role-playing to induce planning and reflection helps, but I still feel that upper bound. I've also noticed that model performance declines when using constrained outputs. Last year I would go to all this trouble decomposing tasks in ways that seem silly given the current models. At the pace things are moving, I expect to see models soon that can handle 10x the complexity and 10 MB of context; I just hope I can afford to use them.
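Treating ambiguity like predictive uncertainty can be made concrete with a confidence gate: act only when the model's best candidate clears a threshold, otherwise escalate (clarify with the user, or decompose further). A minimal sketch; the scores and threshold here are illustrative—in practice you might derive confidence from token logprobs or a separate verifier:

```python
def choose_action(candidates, threshold=0.7):
    """Pick the highest-confidence candidate action, but only execute it
    above the threshold; otherwise escalate instead of guessing.
    `candidates` is a list of {"tool": str, "confidence": float} dicts."""
    best = max(candidates, key=lambda c: c["confidence"])
    if best["confidence"] >= threshold:
        return ("execute", best["tool"])
    return ("clarify", None)
```

The same gate generalizes to decomposition: below the threshold, split the task and re-score the pieces rather than letting the model push through a dumb mistake.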