
amabito | 12 days ago

This is interesting.

It looks less like a “model failure” and more like a containment failure.

When agents audit themselves, you’re effectively running recursive evaluation without structural bounds.

Did you enforce any step limits, retry budgets, or timeout propagation?

Without those, self-evaluation loops can amplify errors pretty quickly.
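To make those three knobs concrete, here is a minimal bounded-loop sketch; all names, limits, and error types are illustrative, not taken from the project under discussion:

```python
import time

def run_bounded(step_fn, step_limit=50, retry_budget=3, timeout_s=60.0,
                deadline=None):
    """Bounded agent loop: step limit, retry budget, and a deadline that
    propagates into every sub-call instead of being reset per step."""
    if deadline is None:
        deadline = time.monotonic() + timeout_s
    retries = 0
    for step in range(step_limit):            # hard step limit
        if time.monotonic() >= deadline:      # propagated timeout
            raise TimeoutError("deadline exceeded")
        try:
            result = step_fn(step, deadline)  # sub-calls see the same deadline
        except RuntimeError:
            retries += 1
            if retries > retry_budget:        # cap error amplification
                raise
            continue
        if result is not None:
            return result
    raise RuntimeError("no result within step limit")
```

The point of passing `deadline` down rather than a fresh timeout is that nested self-evaluations can't each grant themselves a full budget.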


rsdza | 12 days ago

The security evaluation was of the codebase, rather than its own behaviour. It just happened to be _its_ codebase.

W.r.t. the self-evaluation of the 'dreamer' genome (think: template), this is... not possible to answer briefly.

The dreamer's normal wake cycle has an 80-loop budget, with increasingly aggressive progress checks injected every 15 actions. When sleeping after a wake cycle, it 'dreams' for a maximum of 10 iterations/actions (provided more than 5 actions were taken).

Every 10 wake cycles it does a deep sleep, which triggers a self-evaluation capped at 100 iterations, where changes to the creature's source code, files and, really, anything can be made.
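The cadence, roughly, as a sketch (function names are mine; budgets and thresholds are the ones above):

```python
WAKE_BUDGET = 80   # hard cap on actions per wake cycle
CHECK_EVERY = 15   # progress check injected every 15 actions
DREAM_CAP = 10     # max dream iterations after a wake cycle
DEEP_EVERY = 10    # deep sleep every 10 wake cycles
DEEP_CAP = 100     # self-evaluation capped at 100 iterations

def run_cycle(cycle_no, act, check, dream, self_eval):
    """One wake/sleep cycle. `check` gets budget pressure in [0, 1] so it
    can become more aggressive as the budget drains."""
    actions = 0
    for _ in range(WAKE_BUDGET):
        act()
        actions += 1
        if actions % CHECK_EVERY == 0 and not check(actions / WAKE_BUDGET):
            break                         # no progress: go to sleep early
    if actions > 5:                       # dream only after real work
        for _ in range(DREAM_CAP):
            dream()
    if cycle_no % DEEP_EVERY == 0:        # bounded self-modification window
        for _ in range(DEEP_CAP):
            self_eval()
    return actions
```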

The creature can also alter its source and files at any point.

The creature lives in a local git repo so the orchestrator can roll back if it breaks itself.
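The rollback part is cheap to implement. A sketch of what the orchestrator side could look like (function names and commit message are my own guesses, not the project's actual API):

```python
import subprocess

def git(repo, *args, capture=False):
    """Run a git command inside the creature's repo."""
    return subprocess.run(["git", "-C", repo, *args], check=True,
                          capture_output=capture, text=True)

def snapshot(repo):
    """Commit everything before a risky cycle; returns the commit hash."""
    git(repo, "add", "-A")
    git(repo, "commit", "--allow-empty", "-m", "pre-cycle snapshot")
    return git(repo, "rev-parse", "HEAD", capture=True).stdout.strip()

def rollback(repo, commit):
    """Restore the creature after it breaks itself."""
    git(repo, "reset", "--hard", commit)
```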

amabito | 12 days ago

That’s actually a pretty disciplined setup.

What you’ve described sounds a lot like layered containment:

Loop budget (hard recursion bound)

Progressive checks (soft convergence control)

Sleep cycles (temporal isolation)

Deep sleep cap (bounded self-modification)

Git rollback (failure domain isolation)
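The nice thing is that all five layers reduce to a small knob set. A hypothetical config (field names are mine, values from the parent comment):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ContainmentConfig:
    # values from the parent comment; field names are hypothetical
    wake_loop_budget: int = 80      # hard recursion bound
    progress_check_every: int = 15  # soft convergence control
    dream_cap: int = 10             # temporal isolation
    deep_sleep_every: int = 10      # wake cycles between self-eval windows
    deep_sleep_cap: int = 100       # bounded self-modification
    rollback_backend: str = "git"   # failure domain isolation
```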

Out of curiosity, have you measured amplification?

For example: total LLM calls per wake cycle, or per deep sleep?

I’m starting to think agent systems need amplification metrics the same way distributed systems track retry amplification.
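A minimal sketch of what such a metric could look like, by analogy with retry-amplification counters in distributed systems (class and method names are invented):

```python
from collections import Counter

class AmplificationMeter:
    """Tracks LLM calls per phase and reports mean calls per top-level cycle."""
    def __init__(self):
        self.calls = Counter()  # phase name -> LLM call count
        self.cycles = 0         # top-level wake cycles observed

    def start_cycle(self):
        self.cycles += 1

    def record_call(self, phase):
        self.calls[phase] += 1

    def amplification(self, phase=None):
        """Mean LLM calls per cycle, overall or for a single phase."""
        total = self.calls[phase] if phase else sum(self.calls.values())
        return total / max(self.cycles, 1)
```

An overall amplification that trends upward across cycles would be an early sign of exactly the runaway self-evaluation loops mentioned upthread.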