WithinReason | 22 days ago
> As a consequence, the model is no longer deterministic at the sequence-level, but only at the batch-level
Therefore they are deterministic when the batch size is 1.
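The batch-size-1 point can be illustrated with a toy sketch (not a real LLM; `fake_logits` is a stand-in for per-step model scores): with a single sequence and greedy (temperature-0) decoding, there is no sampling and no cross-batch interference, so repeated runs produce identical token sequences.

```python
import numpy as np

# Stand-in for the per-step logits a model would produce for one prompt
# (batch size 1). A fixed seed just makes this toy data reproducible.
rng = np.random.default_rng(seed=0)
fake_logits = rng.normal(size=(10, 50))  # 10 decoding steps, 50-token vocab

def greedy_decode(logits):
    # Greedy decoding: take the argmax token at every step, no sampling.
    return [int(np.argmax(step)) for step in logits]

run1 = greedy_decode(fake_logits)
run2 = greedy_decode(fake_logits)
assert run1 == run2  # same prompt, same model state -> same output
```

This only shows determinism of the decoding rule itself; in practice one also has to pin down floating-point reduction order and kernel selection, which is what the cited sources discuss.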
Your second source lists a number of ways to make LLMs deterministic. The title of your third source is "Defeating Nondeterminism in LLM Inference", which also means they can be made deterministic.
Every single one of your sources proves you wrong, so no more sources need to be cited.
rvz | 9 days ago
This is like saying: "C++ is 'safe' if you turn off all the default features and know what you are doing" — except by that point it's practically useless, and it still isn't safe.
The language is still fundamentally memory-unsafe, just as LLMs are fundamentally deep neural networks, which comes with downsides: they are unpredictable black boxes whose outputs carry a lot of non-determinism.
> Your second source lists a number of ways to make LLMs deterministic. The title of your third source is "Defeating Nondeterminism in LLM Inference", which also means they can be made deterministic.
That is exactly my point: "when", "can be made", "ways to make LLMs deterministic".
The phrasing tells you that both papers recognise non-determinism in LLMs as a problem, which makes my point even more valid and is exactly why I linked them.
Both papers highlight this fundamental property of LLMs right at the start.
> Every single one of your sources proves you wrong, so no more sources need to be cited.
Nope, it is quite the opposite.