nothis|1 year ago
I'm not pretending to understand half the words uttered in this discussion, but I'm constantly reminded of how much it helps me to articulate things (explain them to others, write them down, etc.) in order to understand them. Maybe that thinking really does happen almost entirely on a linguistic level, and I'm not doing half as much other thinking (visualization, abstract logic, etc.) in the process as I thought. That feels weird.
error_logic|1 year ago
You could sort of represent the deterministic contents of an LLM by compiling all the algorithms and training data in some form, or maybe as a visual mosaic of the weights and tokens, or what have you--but that still doesn't really explain the outcome when a model is presented with novel strings. The patterns are emergent properties that converge on familiar language--they're something deeper than the individual words that result.