ClintEhrlich | 13 days ago
I'd like to quickly summarize what is different about our approach and why it matters.
Our work was inspired by brilliant research done at MIT CSAIL on "Recursive Language Models" (RLMs). One of the controversies has been whether these models are just a formalization of what agents like Claude Code already do vs. whether they bring new capabilities to the table.
By outperforming Claude on the major long-context benchmark, we provide a strong signal that something fundamentally new is happening. (In other words, it's not "just Claude Code" because it demonstrably outperforms Claude Code in the long-context regime.)
Where our contribution, LCM, differs from RLMs is how we handle recursion. RLMs use "symbolic recursion" -- i.e., they have an LLM write a script to recursively call itself in order to manipulate the context, which is stored in a REPL. This provides maximum flexibility... but it often goes wrong, since the LLM may write imperfect scripts.
LCM attempts to decompose the recursion from RLMs into deterministic primitives so that the control flow can be managed by an engine rather than left to the whims of the LLM. In practice, this means we replace bespoke scripts with two mechanisms: (1) A DAG-based context management system that works like paged virtual memory, except for managing conversations and files; and (2) Operator-level recursion, like "Map" for LLMs, which lets one tool call process thousands of tasks.
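To make the "Map for LLMs" idea concrete, here is a minimal sketch of what an operator-level map primitive could look like. The names `llm_map` and `call_llm` are illustrative, not LCM's actual API: the point is that the engine owns the fan-out deterministically, and the model issues one tool call instead of writing its own recursion script.

```python
from concurrent.futures import ThreadPoolExecutor

def llm_map(call_llm, prompt_template, items, max_workers=8):
    """Apply one prompt to many items as a single operator-level tool call.

    The engine, not the model, owns the fan-out: each item becomes an
    independent LLM invocation, and only the list of results is returned
    to the caller's context.
    """
    def run_one(item):
        return call_llm(prompt_template.format(item=item))

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_one, items))

# Example with a stub "LLM" that just uppercases its prompt:
results = llm_map(lambda p: p.upper(), "summarize: {item}", ["a", "b"])
```

Because the control flow lives in the engine, failures, retries, and parallelism limits can be enforced uniformly rather than depending on whatever script the model happened to write.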
An analogy we draw in the paper is the evolution from goto statements (of Dijkstra's "Go To Statement Considered Harmful" fame) to structured programming. RLMs are maximally expressive, but all of that power comes with the risk of things going awry. We have built a more mechanistic system, which can provide stronger guarantees when deployed in production with today's models.
Happy to answer any questions! Thanks for taking a look at the paper!
jorl17 | 13 days ago
I've echoed the sentiment here on HN (and elsewhere) that these kinds of mechanisms seem to be a pathway to extending context longer and longer and longer and I wish I could toy around with this technology right now (can I?). I'm so excited!!
Your work is the shoulders-built-on-shoulders upon which other giants shall keep on building. Thank you so much.
ClintEhrlich | 13 days ago
Yes, we think there is a ton of low-hanging fruit from taking lessons from OS/PL theory and applying them to LLM tooling.
This is our first contribution in that direction. There will be more!
vessenes | 13 days ago
Have you considered making an openclaw plugin/PR for it? I understand you have your own coding CLI tool, but this doesn't look so hard to implement that it couldn't be implemented elsewhere.
Either way, thanks for sharing this.
ClintEhrlich | 13 days ago
We have heard from a ton of OpenClaw users that the biggest barrier to them getting everything they want out of their agents is that memory is not a solved problem.
LCM could be a great solution to that. Stay tuned -- will ship it ASAP.
quotemstr | 13 days ago
> deterministic primitives
Are agent-map and LLM-map the only two options you've given the model for recursive invocations? No higher-level, er, reduction operators to augment the map primitives?
belisarius222 | 13 days ago
It's quite possible I'm wrong about that. But if a reduce primitive were wanted, I would write llm_reduce like this: it would spawn a sub-task for every pair of elements in the list, which would call an LLM with a prompt telling it how to combine the two elements into one. The output type of the reduce operation would need to be the same as the input type, just like in normal map/reduce. This allows for a tree of operations to be performed, where the reduction runs for log(n) rounds, resulting in a single value.
That value should probably be loaded into the LCM database by default, rather than putting it directly into the model's context, to protect the invariant that the model should be able to string together arbitrarily long sequences of maps and reduces without filling up its own context.
I don't think this would be hard to write. It would reuse the same database and parallelism machinery that llm_map and agentic_map use.
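The pairwise tree reduction described above can be sketched as follows. This is a hypothetical shape for `llm_reduce`, not LCM's actual implementation; `combine` stands in for an LLM call that merges two elements into one of the same type, and a stub combiner is used for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

def llm_reduce(combine, items, max_workers=8):
    """Pairwise tree reduction: combine adjacent pairs in parallel,
    repeating for ~log2(n) rounds until one value remains.

    `combine(a, b)` must return the same type as its inputs, as in
    ordinary map/reduce, so the rounds can be chained.
    """
    values = list(items)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while len(values) > 1:
            pairs = [(values[i], values[i + 1])
                     for i in range(0, len(values) - 1, 2)]
            reduced = list(pool.map(lambda ab: combine(*ab), pairs))
            if len(values) % 2:          # carry the odd element forward
                reduced.append(values[-1])
            values = reduced
    return values[0]

# Stub "LLM combiner" that concatenates two partial summaries:
out = llm_reduce(lambda a, b: a + "+" + b, ["s1", "s2", "s3", "s4"])
```

As noted above, in a real system the final value would presumably be written to the LCM database rather than the model's context, so arbitrarily long chains of maps and reduces never fill the context window.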
flashgordon | 11 days ago
On an unrelated note -- I did notice you were also a lawyer. So, umm, what is next for you re this? What should we gear up for :)