top | item 47038813

0x696C6961 | 13 days ago

In what world is this simpler than just giving the agent a list of functions it can call?

Mic92|13 days ago

MCP tool calls are usually sequential and therefore waste a lot of tokens. There is some research from Anthropic (I think there was also a blog post from Cloudflare) on how code sandboxes are actually a more efficient interface for LLM agents, because they are really good at writing code and can combine multiple "calls" into one piece of code. Another data point: code is more deterministic and reliable, so you reduce LLM hallucinations.
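A minimal sketch of the difference (the tool names `search_orders` and `refund` are hypothetical, not from any real MCP server): with sequential tool calls, every intermediate result goes back through the model's context window; with a code sandbox, the model emits one script that composes the calls, and only the final result returns.

```python
# Hypothetical tool functions that would be exposed to the agent.
def search_orders(customer_id: str) -> list[dict]:
    # Stub: a real implementation would query an order service.
    return [{"id": 1, "status": "delayed"}, {"id": 2, "status": "shipped"}]

def refund(order_id: int) -> str:
    # Stub: a real implementation would issue the refund.
    return f"refunded {order_id}"

# Sequential MCP style: the model calls search_orders, reads the full
# result in its context, then issues one refund call per delayed order,
# paying tokens for every round trip.
#
# Code-sandbox style: the model writes this one script instead, so the
# intermediate order list never re-enters the context window.
results = [refund(o["id"])
           for o in search_orders("c42")
           if o["status"] == "delayed"]
print(results)
```

One round trip back to the model instead of N, which is where the token savings come from.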

foota|13 days ago

What do the calls being sequential have to do with tokens? Do you just mean that the LLM has to think every time it gets a response (as opposed to being able to compose them)?

dvt|13 days ago

Who implements those functions? E.g., store.order has to have its logic somewhere.

0x696C6961|10 days ago

Those functions usually already exist; you just write light wrappers around them for the LLM.
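A sketch of what such a light wrapper might look like, assuming a pre-existing `place_order` function (both names are hypothetical): the wrapper just validates the model's untrusted arguments and flattens the result into a string the model can read.

```python
# Hypothetical existing business logic, already implemented elsewhere.
def place_order(sku: str, quantity: int) -> dict:
    return {"sku": sku, "quantity": quantity, "status": "created"}

# Light wrapper exposed to the LLM: validate untrusted input,
# delegate to the existing function, return a readable summary.
def store_order(sku: str, quantity: int) -> str:
    if quantity <= 0:
        return "error: quantity must be positive"
    order = place_order(sku, quantity)
    return f"order {order['sku']} x{order['quantity']}: {order['status']}"
```

The logic lives in `place_order` as before; the wrapper only adapts its interface for the agent.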