top | item 47095541

alexgarden | 9 days ago

Yah... how does this evolve... this is the big question. Honest answer? We'll see.

My opinion? Human-in-the-loop will get thinner over time. As that happens, the accountability chain has to thicken. If we want any notion of reliable trust, those two scales have to balance - I don't think this scales without it.

Broadly speaking (I've talked a lot about life in the post-rules universe), we (humans) stop signing actions and start signing policies - policies here being declarative envelopes that define the boundaries of agent automation.
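A minimal sketch of what "signing a policy instead of actions" could look like. Everything here is an illustrative assumption - the field names, the HMAC stand-in for a real signature scheme, and the `action_allowed` check are hypothetical, not a schema from the comment:

```python
import hmac, hashlib, json

# Hypothetical policy envelope: declares what the agent may do.
# Field names are illustrative assumptions, not a real schema.
policy = {
    "agent": "deploy-bot",
    "allowed_actions": ["read_logs", "restart_service"],
    "max_spend_usd": 50,
    "expires": "2026-01-01T00:00:00Z",
}

SIGNING_KEY = b"operator-secret"  # stand-in for a real private key

def sign_policy(policy: dict, key: bytes) -> str:
    """The human signs the policy once, instead of signing each action."""
    canonical = json.dumps(policy, sort_keys=True).encode()
    return hmac.new(key, canonical, hashlib.sha256).hexdigest()

def action_allowed(policy: dict, action: str) -> bool:
    """The agent's runtime checks every action against the envelope."""
    return action in policy["allowed_actions"]

signature = sign_policy(policy, SIGNING_KEY)
assert action_allowed(policy, "restart_service")
assert not action_allowed(policy, "delete_database")
```

The point of the canonical-JSON step is that the signature binds one exact policy text, so the envelope can't be quietly edited after signing.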

Couple this with a proof system that can (cryptographically) prove that the agent stayed between the lines.
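The simplest stand-in for such a proof system is a tamper-evident action log: hash-chain each action to the one before it, then have the verifier check both chain integrity and that every action stayed inside the envelope. A real system would use actual cryptographic proofs; this hypothetical sketch only shows the shape of the check:

```python
import hashlib, json

def link(prev_hash: str, action: dict) -> str:
    """Hash-chain each action to its predecessor (tamper-evident log)."""
    payload = prev_hash + json.dumps(action, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def record(log: list, action: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"action": action, "hash": link(prev, action)})

def verify(log: list, allowed_actions: set) -> bool:
    """Check chain integrity AND that every action stayed between the lines."""
    prev = "genesis"
    for entry in log:
        if entry["action"]["name"] not in allowed_actions:
            return False
        if entry["hash"] != link(prev, entry["action"]):
            return False
        prev = entry["hash"]
    return True

log = []
record(log, {"name": "read_logs"})
record(log, {"name": "restart_service"})
assert verify(log, {"read_logs", "restart_service"})
log[0]["action"]["name"] = "delete_database"   # tamper with history
assert not verify(log, {"read_logs", "restart_service"})
```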

Build on that and trust between agents becomes computable. If A trusts B, you have a derivable trust score (with some decay over time), and quorum models fall out of that naturally.
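One hypothetical way to make that computable, assuming exponential decay and a simple threshold quorum (the half-life and threshold values are made up for illustration):

```python
# Hypothetical trust calculus: scores decay with the age of the evidence,
# and an action proceeds only when enough decayed trust endorses it.
def decayed_trust(base: float, age_days: float, half_life: float = 30.0) -> float:
    """Trust halves every `half_life` days since the last interaction."""
    return base * 0.5 ** (age_days / half_life)

def quorum_approves(endorsements, threshold: float = 1.5) -> bool:
    """Sum decayed trust from endorsing agents; compare to a threshold."""
    return sum(decayed_trust(b, a) for b, a in endorsements) >= threshold

# Three endorsers: two recent (full weight), one 60 days old (quarter weight).
recent = [(1.0, 0.0), (1.0, 0.0), (1.0, 60.0)]
assert quorum_approves(recent)        # 1 + 1 + 0.25 = 2.25 >= 1.5

stale = [(1.0, 90.0), (1.0, 90.0)]
assert not quorum_approves(stale)     # 0.125 + 0.125 = 0.25 < 1.5
```

The decay term is what keeps the quorum honest: endorsements go stale instead of accumulating forever.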

Then you get to proof composition - essentially, instead of verifying each checkpoint, you verify a single proof for the entire session, and the math guarantees nothing was skipped. The human only needs to see the summary.
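The intuition can be sketched with an ordered hash fold (real proof composition would use something like recursive SNARKs, which this is not): every checkpoint feeds the next hash, so a single root commits to the whole session and any skipped or altered step changes it.

```python
import hashlib

def fold(checkpoint_hashes: list) -> str:
    """Fold all checkpoint hashes, in order, into one session commitment."""
    acc = b"session-start"
    for h in checkpoint_hashes:
        acc = hashlib.sha256(acc + h).digest()
    return acc.hex()

checkpoints = [hashlib.sha256(f"step-{i}".encode()).digest() for i in range(5)]
session_proof = fold(checkpoints)

assert fold(checkpoints) == session_proof        # intact session verifies
assert fold(checkpoints[:-1]) != session_proof   # skipping a step changes the root
```

The verifier checks one value instead of five; "nothing was skipped" falls out of the fold being order-dependent.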

All of this presumes the policy was correct to begin with. This approach isn't a substitute for "don't write sloppy policy or be an asshole."
