orbital-decay | 20 hours ago
1. Same input = same output. This is determinism, and within the lifetime of a single model snapshot it's technically fairly trivial to achieve - it's mostly a business decision, because you pay extra for worse batching. It's harder if you need to extend the guarantee into the future, since you have to keep both the snapshot and the inference stack unchanged. It's also a relatively niche requirement, needed mainly for build reproducibility, supply-chain security, and the like.
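A minimal sketch of why this holds: greedy (argmax) decoding over a frozen snapshot is a pure function of its input. The "model" below is just a hypothetical hash-based stand-in, not a real LLM, but the property is the same - no sampling randomness, no snapshot drift, so identical prompts give identical outputs:

```python
import hashlib

def toy_logits(context, vocab_size=16):
    # Toy stand-in for a frozen model snapshot: logits are derived
    # deterministically from a hash of the context.
    digest = hashlib.sha256(context.encode()).digest()
    return [digest[i % len(digest)] for i in range(vocab_size)]

def greedy_decode(prompt, steps=8):
    # Greedy argmax decoding: no randomness, so the output depends
    # only on the prompt and the (frozen) "snapshot" above.
    out = prompt
    for _ in range(steps):
        logits = toy_logits(out)
        out += format(logits.index(max(logits)), "x")
    return out

print(greedy_decode("hello") == greedy_decode("hello"))  # True
```

In a real serving stack the hard part is everything around the argmax: nondeterministic kernels and batch-dependent floating-point reductions can flip ties, which is exactly the batching cost mentioned above.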
2. Zero error rate with arbitrary inputs and outputs. This is not determinism, and it is NOT achievable by any model at all, because the domain LLMs (and humans!) operate in is fundamentally ambiguous. If you want to enforce formal rules, verify your inputs and outputs formally. Trying to solve it purely with intelligence (human or machine) is a fool's errand: you can keep the error rate low enough, but you can't guarantee the absence of errors, due to the nature of intelligence.
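A sketch of what "verify formally" means in practice - instead of trusting the model to follow a format, parse and check its output mechanically and retry or fail loudly on violation. The `model_generate` function here is a hypothetical stub standing in for an LLM call:

```python
import json

def model_generate(prompt):
    # Hypothetical stand-in for an LLM asked to return JSON;
    # a real model may return anything at all.
    return '{"name": "Alice", "age": 30}'

def validated_generate(prompt, max_retries=3):
    # The guarantee comes from the checker, not the model: every
    # output is parsed and validated before it is accepted.
    for _ in range(max_retries):
        raw = model_generate(prompt)
        try:
            obj = json.loads(raw)
        except json.JSONDecodeError:
            continue  # formal check failed, retry
        if isinstance(obj, dict) and isinstance(obj.get("age"), int):
            return obj
    raise ValueError("model never produced output passing formal checks")

print(validated_generate("Give me a user as JSON")["age"])  # 30
```

The error rate of the model only affects how often you retry; the verifier is what makes malformed output impossible to pass through.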
zbyforgotp | 16 hours ago