You're just using words incorrectly. Deterministic means repeatable. That's it. Predictable, verifiable, etc. are tangential to deterministic. Your points are largely correct, but you're not using the right words, which just obfuscates your meaning.
Nope. You have not shown how a large-scale collection of neural networks, irrespective of architecture, is more deterministic than a 'compiler'; you are only repeating the known misconception that setting the temperature to 0 makes LLM output deterministic, which it does not [0] [1] [2], otherwise you would not have this problem in the first place.
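To make the cited failure mode concrete, here is a minimal toy sketch (my own illustration, not code from [0]–[2]): floating-point addition is not associative, so summing the same logit contributions in a different order, as can happen across batch sizes or kernel schedules, changes the last bits of a logit, which is enough to flip a greedy (temperature-0) argmax between two near-tied tokens.

```python
def argmax(xs):
    # index of the first maximal element (strict > keeps ties on the left)
    best = 0
    for i, v in enumerate(xs):
        if v > xs[best]:
            best = i
    return best

# The same three terms summed in two different orders give different floats.
a, b, c = 0.1, 0.2, 0.3
logit_order1 = (a + b) + c   # 0.6000000000000001
logit_order2 = a + (b + c)   # 0.6

rival = 0.6000000000000001   # a hypothetical second token's logit, nearly tied

# Greedy (temperature-0) decoding is just argmax over the logits.
pick1 = argmax([logit_order1, rival])
pick2 = argmax([logit_order2, rival])
print(pick1, pick2)  # 0 1 -- same model, same input, different token
```

The point is only that reduction order is one of the hidden inputs: change it and the "temperature 0" output changes with it.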
Even if you do that, the outputs are useless anyway, so it really does not help your point. Therefore:
> You're just using words incorrectly. Deterministic means repeatable. That's it. Predictable, verifiable, etc. are tangential to deterministic.
There is nothing deterministic or predictable about an LLM even when you compare it to a compiler, unless you can guarantee that the individual neurons, through inference, give a predictable output that would be useful enough for a drop-in compiler replacement.
> You have not shown how a large scale collection of neural networks irrespective of their architecture is more deterministic
It's software. Without an external randomness source, it's 100% deterministic, excluding the impact of hardware glitches. This...isn't debatable. You can make it seem non-deterministic by concealing inputs (e.g., when batching multiple requests, any given request looks "nondeterministic" when viewed in isolation in many frameworks, because batches use shared state and aren't isolated), but even then it is still deterministic; you are just choosing to look at an incomplete set of the inputs that determine the output.
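A toy sketch of that claim (my own example, not from the thread): sampling from a fixed distribution is fully repeatable once every input, the RNG seed included, is pinned down. The "randomness" in decoding is just another input.

```python
import math
import random

def sample_token(logits, rng):
    # softmax over logits, then draw one index from the distribution
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]  # arbitrary toy values

rng1 = random.Random(42)
run1 = [sample_token(logits, rng1) for _ in range(5)]
rng2 = random.Random(42)
run2 = [sample_token(logits, rng2) for _ in range(5)]

assert run1 == run2  # identical inputs (seed included) -> identical outputs
```

Hide the seed and the runs look random; include it among the inputs and the process is plainly deterministic.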
Yes, there are some unknown sources of non-determinism when running production LLM architectures at full capacity. But that's completely irrelevant to the point. The core algorithm is deterministic. And you're still conflating deterministic with predictable. It's strange to have such disregard for the meaning of words and their correct usage.
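The deterministic-vs-predictable distinction has a classic illustration (my own sketch, not from the thread): the logistic map is exactly repeatable, with the same input producing the same trajectory on every run, yet it is practically unpredictable, since tiny input changes diverge rapidly.

```python
def trajectory(x0, steps=50):
    # iterate the logistic map x -> r*x*(1-x) in its chaotic regime (r=3.9)
    x = x0
    out = []
    for _ in range(steps):
        x = 3.9 * x * (1 - x)
        out.append(x)
    return out

assert trajectory(0.2) == trajectory(0.2)        # repeatable: deterministic
assert trajectory(0.2) != trajectory(0.2000001)  # sensitive: hard to predict
```

Deterministic says only that identical inputs reproduce identical outputs; it says nothing about whether you can anticipate those outputs without running the computation.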
rvz|23 days ago
[0] https://152334h.github.io/blog/non-determinism-in-gpt-4/
[1] https://arxiv.org/pdf/2506.09501
[2] https://thinkingmachines.ai/blog/defeating-nondeterminism-in...
dragonwriter|10 days ago
hackinthebochs|23 days ago