(no title)
clord | 7 months ago
Makes me wonder if future LLMs will compose things nonlinearly, able to work temporarily in non-token-order spaces, or will at least have a way to map their output back to linear token order. I know nonlinear thinking is common while writing code, though. Current LLMs might be hiding that deficit by having a large and perfect context window.
lelanthran | 7 months ago
Shouldn't be hard to train a coding LLM to do this too by doubling the training time: train the LLM both forwards and backwards across the training data.
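A minimal sketch of what that augmentation could look like (not from the thread; assumes a PyTorch-style next-token-prediction setup, and the dataset class and toy token IDs are purely illustrative):

    # Sketch: serve every token sequence twice, once forwards and once
    # reversed, roughly doubling the effective training data/time.
    import torch
    from torch.utils.data import Dataset

    class BidirectionalTokenDataset(Dataset):
        """Yields each token sequence as-is and then backwards."""

        def __init__(self, sequences):
            # sequences: list of 1-D LongTensors of token IDs
            self.sequences = sequences

        def __len__(self):
            return 2 * len(self.sequences)

        def __getitem__(self, idx):
            seq = self.sequences[idx // 2]
            if idx % 2 == 1:
                seq = torch.flip(seq, dims=[0])  # backwards copy
            # Standard next-token pairs: predict token t+1 from tokens <= t
            return seq[:-1], seq[1:]

    if __name__ == "__main__":
        toy = [torch.tensor([1, 2, 3, 4, 5]), torch.tensor([7, 8, 9])]
        ds = BidirectionalTokenDataset(toy)
        for i in range(len(ds)):
            x, y = ds[i]
            print(x.tolist(), "->", y.tolist())

Whether that actually teaches useful "backwards" reasoning (as opposed to token-level reversal tricks) is an open question, but it is the cheapest version of the idea.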