Deterministic Governance: mechanical exclusion / bit-identical
5 points | verhash | 1 month ago | github.com
There is no ranking, sampling, or temperature. Given identical inputs, configuration, and substrate, the system always produces bit-identical outputs, verified by repeated hash checks. The implementation explores different elastic modulus formulations that change how alignment and proximity contribute to stress, without changing the deterministic nature of the process. The intent is to examine what governance looks like when exclusion is causal, replayable, and mechanically explainable rather than statistical. Repository: https://github.com/Rymley/Deterministic-Governance-Mechanism
nextaccountic|1 month ago
Problem is, we sometimes want LLMs to be probabilistic. We want to be able to try again if the first answer was deemed unsuccessful.
Nevermark|1 month ago
> At each step, stress increments are computed from measurable terms such as alignment and proximity to a verified substrate.
Well, obviously it's ... uh, ...
It may not be, but the whole description reads as category-error satire to me.
gwern|1 month ago
verhash|1 month ago
Verified facts:
“The sky is blue”
“Water is wet”
Candidate outputs:
“The sky is blue”
“The sky is green”
Each sentence is embedded deterministically (in the demo, via a hash-based mock embedder so results are reproducible). For each candidate, I compute:
similarity to the closest verified fact
distance from that fact
a penalty function based on those values
Penalty accumulates over a fixed number of steps. If it exceeds a fixed threshold, the candidate is rejected. In this example, “The sky is blue” stays below the threshold; “The sky is green” crosses it and is excluded.
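The loop described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the embedder, the cosine-similarity measure, and the `1 - similarity` penalty increment are my assumptions standing in for whatever formulation the implementation uses.

```python
import hashlib
import math

def mock_embed(text, dim=8):
    """Deterministic mock 'embedding': bytes of a SHA-256 digest mapped to [-1, 1]."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 127.5 - 1.0 for b in digest[:dim]]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def evaluate(candidate, verified_facts, steps=10, threshold=5.0):
    """Accumulate a penalty over a fixed number of steps; reject if it crosses
    the threshold. The (1 - similarity) increment is a placeholder penalty term."""
    cand_vec = mock_embed(candidate)
    # Similarity to the closest verified fact.
    best_sim = max(cosine(cand_vec, mock_embed(f)) for f in verified_facts)
    penalty = 0.0
    for _ in range(steps):
        penalty += 1.0 - best_sim  # distance-style penalty increment
        if penalty > threshold:
            return "rejected", penalty
    return "accepted", penalty

facts = ["The sky is blue", "Water is wet"]
print(evaluate("The sky is blue", facts))   # exact match: similarity ~1.0, near-zero penalty
print(evaluate("The sky is green", facts))
```

Because the embedder is a pure function of the input bytes, the whole pipeline is replayable: rerunning `evaluate` with the same strings and config always yields the same decision and the same accumulated penalty.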
What I tested:
Identical inputs + identical config always produce identical outputs (verified by hashing a canonical JSON of inputs + outputs).
Re-running the same scenario repeatedly produces the same decision and the same hash.
Changing a single parameter (distance, threshold, steps) predictably changes the outcome.
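The hash check described above can be sketched like this (a hypothetical illustration of the idea, not the repo's code): serialize inputs and outputs into canonical JSON — sorted keys, fixed separators — and hash the bytes, so any two runs can be compared by digest alone.

```python
import hashlib
import json

def run_hash(inputs, outputs):
    """Hash a canonical JSON encoding of inputs + outputs.
    sort_keys and fixed separators make the byte stream independent of dict order."""
    payload = {"inputs": inputs, "outputs": outputs}
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

h1 = run_hash({"threshold": 5.0, "steps": 10}, {"decision": "accepted"})
h2 = run_hash({"steps": 10, "threshold": 5.0}, {"decision": "accepted"})
assert h1 == h2  # same content, different key order -> same digest
```

Any change to a single parameter (distance, threshold, steps) changes the canonical bytes and therefore the digest, which is what makes the decision audit-friendly.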
Why this isn’t “AI slop”:
There’s no generative model here at all.
The terminology is unfortunate but the code is explicit arithmetic.
The entire point is removing non-determinism, not adding hand-wavy intelligence.
If you think the framing obscures that rather than clarifies it, that’s useful feedback—I’m actively dialing the language back. But the underlying claim is narrow: you can build governance filters that are deterministic, replayable, and auditable, which most current AI pipelines are not.
If that’s still uninteresting, fair enough—but it’s not trying to be mystical or persuasive, just mechanically verifiable.
You can test it here if you like: https://huggingface.co/spaces/RumleyRum/Deterministic-Govern...