Hey @yaront111, Cordum looks like a solid piece of infrastructure, especially the Safety Kernel and the NATS-based dispatch.
My focus with Faramesh.dev is slightly upstream from the scheduler. I’m obsessed with the Canonicalization problem. Most schedulers take a JSON payload and check a policy, but LLMs often produce semantic tool calls that are messy or obfuscated.
I’m building CAR (Canonical Action Representation) to ensure that no matter how the LLM phrases the intent, the hash is identical. Are you guys handling the normalization of LLM outputs inside the Safety Kernel, or do you expect the agent to send perfectly formatted JSON every time?
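To make "the hash is identical no matter the phrasing" concrete, here is a minimal illustrative sketch (the field names and hashing choices are mine, not CAR's actual schema): structurally different but semantically equal payloads are serialized into one deterministic byte string before hashing.

```python
import hashlib
import json

def canonical_hash(tool_call: dict) -> str:
    """Hash a tool call so key order and whitespace don't affect the digest.

    Hypothetical CAR-style structural canonicalization; illustrative only.
    """
    # sort_keys plus compact separators yields exactly one serialization
    # per semantic payload, so the SHA-256 digest is deterministic.
    canonical = json.dumps(tool_call, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Same action, different key order and formatting -> same hash.
a = {"tool": "fs.delete", "args": {"path": "/tmp/x", "recursive": True}}
b = {"args": {"recursive": True, "path": "/tmp/x"}, "tool": "fs.delete"}
assert canonical_hash(a) == canonical_hash(b)
```

This only solves the structural half of the problem; semantic aliasing (two genuinely different spellings of the same intent) needs an extra normalization pass before hashing.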
That’s a sharp observation.
You’re partially right: CAP (our protocol) handles the structural canonicalization. We use strict Protobuf/Schematic definitions, so if an agent sends messy JSON that doesn't fit the schema, it’s rejected at the gateway. We don't deal with 'raw text' tool calls in the backend.
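As a language-neutral stand-in for that reject-at-the-gateway behavior (CAP itself uses Protobuf; the field set below is made up for illustration), the key property is that unknown or missing fields are fatal rather than silently tolerated:

```python
# Hypothetical sketch of strict gateway validation, not CAP's real schema.
# Unknown fields and missing required fields both cause rejection.
REQUIRED = {"tool", "args"}
ALLOWED = REQUIRED | {"trace_id"}  # illustrative optional field

def validate(payload: dict) -> bool:
    """Accept only payloads that match the strict schema exactly."""
    keys = set(payload)
    return (
        REQUIRED <= keys            # every required field present
        and keys <= ALLOWED         # no unknown fields sneak through
        and isinstance(payload.get("args"), dict)
    )

validate({"tool": "fs.read", "args": {"path": "/etc/hosts"}})  # accepted
validate({"tool": "fs.read", "arguments": {}})  # rejected: wrong field name
```

With Protobuf the same guarantee comes from the schema compiler instead of hand-written checks, but the failure mode is the same: malformed payloads never reach the policy engine.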
But you are touching on the semantic aliasing problem (e.g. rm -rf vs rm -r -f), which is a layer deeper.
Right now, we rely on the specific Worker to normalize those arguments before they hit the policy check, but having a universal 'Canonical Action Representation' upstream would be cleaner.
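The rm -rf vs rm -r -f case can be sketched in a few lines; this is a toy illustration of that worker-side normalization, not how Cordum's workers actually implement it: split combined short flags apart and sort them, so every spelling of the same intent yields one canonical argv.

```python
def canonicalize_argv(argv: list[str]) -> list[str]:
    """Normalize short-flag spellings so aliases collapse to one form.

    Toy sketch: splits combined flags (-rf -> -r, -f) and sorts them.
    Real-world normalization would also need long flags, flag values, etc.
    """
    cmd, flags, positionals = argv[0], set(), []
    for tok in argv[1:]:
        if tok.startswith("-") and not tok.startswith("--") and len(tok) > 1:
            flags.update("-" + ch for ch in tok[1:])  # split -rf into -r, -f
        else:
            positionals.append(tok)
    return [cmd, *sorted(flags), *positionals]

# Both spellings collapse to ['rm', '-f', '-r', '/tmp/x'].
assert canonicalize_argv(["rm", "-rf", "/tmp/x"]) == canonicalize_argv(["rm", "-r", "-f", "/tmp/x"])
```

Doing this once, upstream of the scheduler, is exactly what makes a deterministic hash over the action possible: hash the canonical argv, not the raw one.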
If you can turn 'messy intent' into a 'deterministic hash' before it hits the Cordum Scheduler, that would be a killer combo. Do you have a repo/docs for CAR yet?
amjadfatmi1|1 month ago
yaront111|1 month ago