top | item 46706981

ofabioroma | 1 month ago

~20ms overhead. Built on Cloudflare's edge, so it's fast globally. The value isn't speed—it's that you stop rebuilding context infrastructure and get versioning/debugging for free.

nicklo | 1 month ago

i've always wondered (for this, portkey, etc) - why not have a parallel option that fires an extra request, instead of MITM'ing the llm call?

ofabioroma | 1 month ago

You can fire them in parallel for simple cases. The issue is when you have multi-agent setups. If context isn't persisted before a sub-agent reads it, you get stale state. Single source of truth matters when agents are reading and writing to the same context.

For single-agent flows, parallel works fine.
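The timing issue can be sketched in a few lines of asyncio. This is an illustration of the stale-read race described above, not the product's actual API: `ContextStore`, `fire_and_forget`, and `persist_first` are all hypothetical names, and the `sleep` stands in for the latency of a real persistence call.

```python
import asyncio

class ContextStore:
    """Hypothetical shared context store (the single source of truth)."""
    def __init__(self):
        self._data = {}

    async def write(self, key, value):
        # Simulate the network latency of a real persistence call.
        await asyncio.sleep(0.01)
        self._data[key] = value

    def read(self, key, default=None):
        return self._data.get(key, default)

async def fire_and_forget(store):
    # Parallel option: the write is scheduled but not awaited,
    # so a sub-agent reading right away can see stale state.
    task = asyncio.create_task(store.write("plan", "step-2"))
    value = store.read("plan", "step-1")  # still "step-1": write not done
    await task  # let the write finish so the demo exits cleanly
    return value

async def persist_first(store):
    # In-path (MITM-style) persistence: the write completes before
    # any sub-agent reads, so the read can never be stale.
    await store.write("plan", "step-2")
    return store.read("plan", "step-1")  # guaranteed "step-2"

async def main():
    stale = await fire_and_forget(ContextStore())
    fresh = await persist_first(ContextStore())
    print(stale, fresh)  # step-1 step-2

asyncio.run(main())
```

The trade-off matches the comment: the in-path write adds its latency (the ~20ms mentioned above) to every call, but it is the only ordering that guarantees concurrent readers see the latest context.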