edding4500|10 months ago
Why are all these posts and news articles about LLMs so uninformed? This is human-built technology; you can actually read up on how these things work. And yet they are treated as if they were an alien species that must be examined with sociological methods, where that is not necessary. Grinds my gears every time :D
aeonik|10 months ago
https://github.com/huggingface/transformers/blob/d538293f62f...
kurikuri|10 months ago
Think of each new ‘interaction’ with the LLM as having two things that can change: the context and the PRNG state. We can also think of the PRNG state as having two parts: the random seed (which determines the output sequence), and the index of the last consumed random value from the PRNG. If the context, random seed, and index are the same, then the LLM will always give the same answer. Just to be clear, the only ‘randomness’ in these state values comes from the random seed itself.
The LLM doesn’t produce any randomness itself; it takes randomness as an input (hyper)parameter.
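The seed-plus-index framing above can be illustrated with NumPy's PRNG standing in for a framework's sampler (a minimal sketch, not the internals of any particular inference stack):

```python
import numpy as np

# Two generators with the same seed produce identical streams.
g1 = np.random.default_rng(seed=42)
g2 = np.random.default_rng(seed=42)

# Consume the same number of values (the "index") from each...
a = [g1.random() for _ in range(5)]
b = [g2.random() for _ in range(5)]
assert a == b  # same seed + same index -> same values

# ...but once the indices diverge, the next draws differ.
g1.random()  # advance g1 by one extra draw
assert g1.random() != g2.random()
```

The same logic applies to sampling tokens: fix the seed and the number of values consumed so far, and the draw is fully determined.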
kbelder|10 months ago
I think a more useful approach is to give the LLM access to an API that returns a random number, and let it request one during response formulation when needed.
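Here's a rough sketch of what that could look like, in the style of common function-calling APIs. The tool name `get_random_int` and the schema shape are hypothetical, not any specific vendor's API:

```python
import json
import secrets

# Hypothetical tool schema in the style of common function-calling APIs.
RANDOM_TOOL = {
    "name": "get_random_int",
    "description": "Return a uniformly random integer in [low, high].",
    "parameters": {"low": "integer", "high": "integer"},
}

def handle_tool_call(name: str, arguments: str) -> str:
    """Dispatch a tool call from the model and return its result as JSON."""
    if name == "get_random_int":
        args = json.loads(arguments)
        value = secrets.randbelow(args["high"] - args["low"] + 1) + args["low"]
        return json.dumps({"value": value})
    raise ValueError(f"unknown tool: {name}")

# The serving loop would feed this result back into the context, so the
# "randomness" comes from a real entropy source, not the sampler.
result = json.loads(handle_tool_call("get_random_int", '{"low": 1, "high": 6}'))
assert 1 <= result["value"] <= 6
```

This sidesteps the model's own sampling entirely: the randomness is generated outside the model and injected into its context.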
kerkeslager|10 months ago
EDIT: I'm seeing another poster saying "Deterministic with a random seed?" That's a good point: all the non-determinism comes from the seed, which isn't particularly critical to the algorithm. One could easily make an LLM deterministic by simply always using the same seed.
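A toy version of seeded temperature sampling makes the point concrete (a sketch with made-up logits, not a real model's sampler):

```python
import numpy as np

def sample_token(logits, temperature, seed):
    """Temperature sampling from logits with an explicit seed."""
    rng = np.random.default_rng(seed)
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.5, 0.1]
# Same context (logits) and same seed -> same sampled token, every time.
assert sample_token(logits, 0.8, seed=0) == sample_token(logits, 0.8, seed=0)
```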
dist-epoch|10 months ago
Not fully true: when using floating point, the order of operations matters, and it can vary slightly due to parallelism. I've seen LLMs return different outputs with the same seed.
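The underlying issue is that floating-point addition is not associative, so a parallel reduction that sums in a different order can give a slightly different result:

```python
# Floating-point addition is not associative, so the reduction order
# (which parallelism can change) affects the result.
a, b, c = 1.0, 1e16, -1e16
assert (a + b) + c == 0.0  # the 1.0 is absorbed into 1e16 first
assert a + (b + c) == 1.0  # the big values cancel first
```

In a model, a tiny difference like this in one logit can flip which token gets sampled, and the divergence compounds from there.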
chaoz_|10 months ago
While you can definitely read about how some parts of a very complex neural network function, it's very challenging to understand the underlying patterns.
That's why even the people who invented components of these networks still invest in areas like mechanistic interpretability, trying to develop a model of how these systems actually operate. See https://www.transformer-circuits.pub/2022/mech-interp-essay (Chris Olah)
kaibee|10 months ago
1. Give a model a context with some number of genuinely random numbers and then ask it to generate the next random number. How random is that number? Repeat N times and graph the results: is there anything interesting about them?
2. I remember reading that brains and similar systems are balanced at the edge of chaos. So if a model is bad at outputting random numbers (i.e., needs a very high temperature for the experiment from step 1 to produce a good distribution of random numbers), what, if anything, does that tell us about the model?
3. Can we add a training or fine-tuning step that makes the model better at the experiment from step #2? What effect does that have on its benchmarks?
I'm not an ML researcher, so maybe this is still nonsense.
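The "how random is it?" question from step 1 could be quantified with a simple chi-square test for uniformity. The digit sequence below is a made-up stand-in for model outputs (in the real experiment they would come from the prompts described above):

```python
from collections import Counter

def chi_square_uniform(samples, k=10):
    """Chi-square statistic against a uniform distribution over k outcomes."""
    counts = Counter(samples)
    expected = len(samples) / k
    return sum((counts.get(i, 0) - expected) ** 2 / expected for i in range(k))

# Stand-in for digits collected from the model (hypothetical data).
model_digits = [7, 3, 7, 1, 9, 7, 3, 7, 0, 7, 3, 1, 7, 9, 3, 7, 1, 3, 7, 7]

stat = chi_square_uniform(model_digits)
# Compare against the chi-square critical value for 9 degrees of freedom
# (~16.9 at p=0.05): a much larger statistic suggests the digits are far
# from uniform, e.g. the often-reported bias toward 7.
print(f"chi-square statistic: {stat:.1f}")
```

Repeating this across temperatures would give a curve of "how much temperature does the model need before its output passes a uniformity test."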