
AxiomLab | 8 days ago

Imposing a strict, discrete topology—like a tree or a DAG—is the only viable way to build reliable systems on top of LLMs.

If you leave agent interaction unconstrained, the probabilistic variance compounds into chaos. By encapsulating non-deterministic nodes within a rigidly defined graph structure, you regain control over the state machine. Coordination requires deterministic boundaries.
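A minimal sketch of what "encapsulating non-deterministic nodes in a rigid graph" could look like: the topology and execution order are fixed up front, and only the work inside each node is probabilistic. `call_agent` is a hypothetical placeholder for an LLM call, stubbed deterministically here; none of these names come from a real framework.

```python
from graphlib import TopologicalSorter

def call_agent(task, inputs):
    # Hypothetical non-deterministic LLM call; stubbed deterministically
    # so the sketch is runnable.
    return f"{task}({', '.join(inputs)})"

# The rigid structure: each node maps to the set of nodes it depends on.
# The model never gets to change this graph.
dag = {
    "plan":     set(),
    "research": {"plan"},
    "draft":    {"plan", "research"},
    "review":   {"draft"},
}

def run(dag):
    results = {}
    # Deterministic boundary: nodes execute in a fixed topological order.
    for node in TopologicalSorter(dag).static_order():
        inputs = [results[dep] for dep in sorted(dag[node])]
        results[node] = call_agent(node, inputs)
    return results

out = run(dag)
print(out["review"])
```

The point of the structure is that coordination (who runs when, who consumes whose output) is decided by the graph, not negotiated between agents at runtime.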


bizzletk | 8 days ago

The article addresses this:

> This made sense when agents were unreliable. You’d never let GPT-3 decide how to decompose a project. But current models are good at planning. They break problems into subproblems naturally. They understand dependencies. They know when a task is too big for one pass.

> So why are we still hardcoding the decomposition?

4b11b4 | 8 days ago

Sure, decomposition is already in the pre-training corpus, and then we can do some "instruction-tuning" on top. That's fine for the last mile, but that's it. I'd consider the article's point unaddressed and agree with the root comment.

devonkelley | 6 days ago

Honest question, have you actually run a DAG-based agent system past like 8 nodes? In my experience the topology isn't what gives you reliability, it's the error handling at each node. You can have the most beautiful DAG in the world and it still falls apart when node 3 returns something subtly wrong and nodes 4-8 all proceed as if everything's fine. The structure gives you the illusion of control.
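The "node 3 is subtly wrong" failure mode can be sketched as a validation check at each node boundary, so bad output halts the pipeline instead of propagating. The names here (`validated`, `NodeError`, `node3`) are illustrative, not from any real agent framework:

```python
class NodeError(Exception):
    """Raised when a node's output fails its contract check."""

def validated(node_name, output, check):
    # Fail fast at the boundary instead of letting a subtly wrong
    # result flow into downstream nodes.
    if not check(output):
        raise NodeError(f"{node_name} produced invalid output: {output!r}")
    return output

# Contract for this node: a non-empty list of strings.
def node3():
    return []  # subtly wrong: it IS a list, just an empty one

try:
    result = validated(
        "node3",
        node3(),
        lambda out: bool(out) and all(isinstance(x, str) for x in out),
    )
except NodeError:
    result = None  # nodes 4-8 never run on the bad input

print(result)
```

Without the check, the empty list looks perfectly type-correct to every downstream node, which is exactly the illusion-of-control problem: the DAG guarantees ordering, not correctness.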