item 46823849

Deterministic Governance: mechanical exclusion / bit-identical

5 points | verhash | 1 month ago | github.com

This repository implements a deterministic exclusion engine where governance decisions are treated as a mechanical process rather than a probabilistic one. Candidates exist as stateful objects that accumulate strain under a scheduled constraint pressure. Pressure is applied across explicit phases—nucleation, quenching, and crystallization—and exclusion occurs only when accumulated stress exceeds a fixed yield threshold. Once fractured, a candidate cannot re-enter; history matters.

There is no ranking, sampling, or temperature. Given identical inputs, configuration, and substrate, the system always produces bit-identical outputs, verified by repeated hash checks. The implementation explores different elastic modulus formulations that change how alignment and proximity contribute to stress, without changing the deterministic nature of the process. The intent is to examine what governance looks like when exclusion is causal, replayable, and mechanically explainable rather than statistical. Repository: https://github.com/Rymley/Deterministic-Governance-Mechanism
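The phased stress-accumulation loop described in the blurb can be sketched roughly as follows. This is a minimal illustration, not the repository's code: the phase names come from the description, but the weights, the yield value, and the trivial increment rule are invented for the example.

```python
# Hypothetical sketch of the deterministic exclusion loop described above.
# Phase weights, the yield threshold, and the increment rule are illustrative
# assumptions; only the overall shape (phased pressure, accumulated stress,
# irreversible fracture) comes from the project description.
from dataclasses import dataclass

PHASES = ["nucleation", "quenching", "crystallization"]  # assumed schedule
YIELD_THRESHOLD = 0.5  # fixed yield threshold (illustrative value)

@dataclass
class Candidate:
    text: str
    stress: float = 0.0
    fractured: bool = False  # once fractured, a candidate never re-enters

def stress_increment(candidate: Candidate, phase: str) -> float:
    # Stand-in for the alignment/proximity terms: a fixed per-phase weight
    # times a toy "contradiction" factor, so the run is fully deterministic.
    weights = {"nucleation": 0.1, "quenching": 0.3, "crystallization": 0.2}
    contradiction = 1.0 if "green" in candidate.text else 0.1
    return weights[phase] * contradiction

def run(candidates: list[Candidate]) -> list[Candidate]:
    for phase in PHASES:
        for c in candidates:
            if c.fractured:
                continue  # history matters: fractured candidates are skipped
            c.stress += stress_increment(c, phase)
            if c.stress > YIELD_THRESHOLD:
                c.fractured = True
    return [c for c in candidates if not c.fractured]
```

With these numbers, a candidate containing "green" accumulates 0.1 + 0.3 + 0.2 = 0.6 and fractures during crystallization, while the other candidate accumulates only 0.06 and survives; replaying the run always yields the same result.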

14 comments


foobarbecue|1 month ago

I don't even understand what discipline we're talking about here. Can someone provide some background please?

nextaccountic|1 month ago

The thing that lets LLMs select the next token is probabilistic. This proposes a deterministic procedure instead.

Problem is, we sometimes want LLMs to be probabilistic. We want to be able to try again if the first answer was deemed unsuccessful.

Nevermark|1 month ago

> Quenching is higher-frequency pressure application that amplifies contradictions and internal inconsistencies.

> At each step, stress increments are computed from measurable terms such as alignment and proximity to a verified substrate.

Well, obviously it's ... uh, ...

It may not be, but the whole description reads as category error satire to me.

gwern|1 month ago

OK, this is AI slop ("fracture" alone gives it away). But maybe there's still something of value here? Can you explain it in actual human terms, give a real example, and explain what you did to test this and why I shouldn't flag this like I did https://news.ycombinator.com/item?id=46701114 ?

verhash|1 month ago

Verified facts:

“The sky is blue”

“Water is wet”

Candidate outputs:

“The sky is blue”

“The sky is green”

Each sentence is embedded deterministically (in the demo, via a hash-based mock embedder so results are reproducible). For each candidate, I compute:

similarity to the closest verified fact

distance from that fact

a penalty function based on those values

Penalty accumulates over a fixed number of steps. If it exceeds a fixed threshold, the candidate is rejected. In this example, “The sky is blue” stays below the threshold; “The sky is green” crosses it and is excluded.
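Under the assumption that the demo works roughly as described, the per-candidate computation might look like this in Python. The hash-based mock embedder, the cosine metric, and the penalty formula below are my guesses at a faithful reconstruction, not the repository's actual code:

```python
# Hypothetical reconstruction of the described pipeline: a deterministic
# hash-based mock embedder, similarity/distance to the closest verified
# fact, and a penalty accumulated over a fixed number of steps.
# All function names and formulas are assumptions for illustration.
import hashlib
import math

def mock_embed(text: str, dim: int = 8) -> list[float]:
    # Deterministic "embedding": the first bytes of a SHA-256 digest,
    # rescaled to [0, 1], so identical input always gives identical output.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dim]]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def penalty(candidate: str, verified: list[str]) -> float:
    emb = mock_embed(candidate)
    best_sim = max(cosine(emb, mock_embed(v)) for v in verified)
    return 1.0 - best_sim  # penalty grows with distance from the closest fact

def decide(candidate: str, verified: list[str],
           steps: int = 5, threshold: float = 1.0) -> str:
    accumulated = 0.0
    for _ in range(steps):
        accumulated += penalty(candidate, verified)
        if accumulated > threshold:
            return "excluded"
    return "accepted"
```

A candidate that exactly matches a verified fact has distance zero, so it accumulates no penalty and is always accepted; any other candidate's outcome depends only on its fixed distance, the step count, and the threshold.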

What I tested:

Identical inputs + identical config always produce identical outputs (verified by hashing a canonical JSON of inputs + outputs).

Re-running the same scenario repeatedly produces the same decision and the same hash.

Changing a single parameter (distance, threshold, steps) predictably changes the outcome.
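The first check above, hashing a canonical JSON serialization of inputs plus outputs, can be reproduced in a few lines. The record layout here is an assumption; the repo's schema may differ:

```python
# Sketch of the determinism check: hash a canonical JSON serialization of
# inputs + outputs and compare hashes across runs. The record structure is
# an illustrative assumption, not the repository's exact schema.
import hashlib
import json

def run_hash(inputs: dict, outputs: dict) -> str:
    record = {"inputs": inputs, "outputs": outputs}
    # sort_keys + fixed separators gives a canonical byte string,
    # so the hash is stable across runs and machines.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

first = run_hash({"candidate": "The sky is green"}, {"decision": "excluded"})
second = run_hash({"candidate": "The sky is green"}, {"decision": "excluded"})
assert first == second  # bit-identical replay
```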

Why this isn’t “AI slop”:

There’s no generative model here at all.

The terminology is unfortunate but the code is explicit arithmetic.

The entire point is removing non-determinism, not adding hand-wavy intelligence.

If you think the framing obscures that rather than clarifies it, that’s useful feedback—I’m actively dialing the language back. But the underlying claim is narrow: you can build governance filters that are deterministic, replayable, and auditable, which most current AI pipelines are not.

If that’s still uninteresting, fair enough—but it’s not trying to be mystical or persuasive, just mechanically verifiable.

You can test it here if you like: https://huggingface.co/spaces/RumleyRum/Deterministic-Govern...