top | item 47100915

hackinthebochs | 9 days ago

Sure, reliability is a problem for the current state of LLMs. But I see no reason to think that's an in-principle limitation.

logicprog | 8 days ago

There are so many papers now showing that LLM "reasoning" is fragile and based on pattern-matching heuristics that I think it's worth considering that, while it may not be an in-principle limitation (in the sense that an autoregressive predictor given infinite data and compute would have to learn to simulate the universe to predict perfectly), in practice we're not going to build Laplace's LLM, and we might need a more direct architecture as a shortcut!