(no title)
amabito | 11 days ago
What you’ve described sounds a lot like layered containment:
Loop budget (hard recursion bound)
Progressive checks (soft convergence control)
Sleep cycles (temporal isolation)
Deep sleep cap (bounded self-modification)
Git rollback (failure domain isolation)
Out of curiosity, have you measured amplification?
For example: total LLM calls per wake cycle, or per deep sleep?
I’m starting to think agent systems need amplification metrics the same way distributed systems track retry amplification.
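For concreteness, retry-amplification-style accounting could look something like this minimal sketch (the class and label names are made up for illustration; the idea is just to count LLM calls per triggering event):

```python
from collections import Counter

class AmplificationMeter:
    """Counts how many downstream LLM calls a single trigger fans out into."""

    def __init__(self):
        self.calls = Counter()
        self.current = None

    def start(self, trigger):
        # trigger is a label for the event, e.g. one wake cycle or one deep sleep
        self.current = trigger

    def record_llm_call(self):
        # call this from whatever wrapper actually issues the LLM request
        if self.current is not None:
            self.calls[self.current] += 1

    def amplification(self, trigger):
        # analogous to retry amplification: LLM calls made per triggering event
        return self.calls[trigger]

meter = AmplificationMeter()
meter.start("wake-1")
for _ in range(3):  # pretend the agent made 3 LLM calls during this wake cycle
    meter.record_llm_call()
print(meter.amplification("wake-1"))  # 3
```

Tracking this per wake cycle would make it obvious when a change to the loop budget or progressive checks causes fan-out to creep up.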
rsdza | 11 days ago
So far it seems pretty sane with Claude, and incredibly boring with OpenAI (the OpenAI models just don't want to show any initiative).
One thing I neglected to mention is that it manages its own sleep duration, and there's a 'wakeup' CLI command. So far the agents (I prefer to call them creatures :) ) do a good job of finding the wakeup command, building scripts to poll for whatever they're waiting on (e.g. GitHub notifications), and sleeping for long periods.
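The poll-then-wake pattern the creatures converge on can be sketched roughly like this (the `check`/`wake` callables stand in for whatever real notification check and wakeup command the agent wires up; none of these names are from the actual system):

```python
import time

def poll_and_wake(check, wake, sleep=time.sleep, interval_s=300, max_polls=1000):
    """Sleep in fixed intervals until check() fires, then trigger wakeup.

    check  -- returns True when there is something worth waking for
              (e.g. unread GitHub notifications)
    wake   -- callable that invokes the agent's wakeup command
    max_polls caps the loop so a broken check can't spin forever
    """
    for _ in range(max_polls):
        if check():
            wake()
            return True
        sleep(interval_s)
    return False

# Demo: the condition becomes true on the third poll; sleeping is stubbed out.
events = iter([False, False, True])
woke = poll_and_wake(lambda: next(events), wake=lambda: None,
                     sleep=lambda s: None, max_polls=10)
print(woke)  # True
```

The `max_polls` bound is the same containment instinct as the loop budget: even the watchdog gets a hard stop.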
There's a daily cost cap, but I'm not yet making the creatures aware of that budget. I think I should do that soon, because it will be an interesting lever.
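Making the creatures budget-aware could be as simple as injecting a spend summary into their context each wake cycle; a minimal sketch (the function name and dollar figures are invented, and the real cap and metering would live in the harness, not the agent):

```python
def budget_note(spent_usd, cap_usd):
    """Format today's spend so it can be dropped into the creature's context."""
    remaining = max(cap_usd - spent_usd, 0.0)
    return (f"Daily budget: ${spent_usd:.2f} spent of ${cap_usd:.2f} "
            f"(${remaining:.2f} remaining)")

print(budget_note(3.40, 10.00))
# Daily budget: $3.40 spent of $10.00 ($6.60 remaining)
```

Whether the agent then rations itself or burns the rest of the budget immediately is exactly the interesting behavioral question this lever would expose.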
rsdza | 11 days ago