For some reason LLMs get a lot of attention. But while simplicity is great, it has limits. To make a model reason, you have to put it in a loop with fallbacks: it has to try possibilities and back off from false branches. That can be done at a higher level, by an algorithm, by another model, or by another thread in the same model. To some degree it can even be done by prompting within the same thread, e.g. asking the LLM to first print a high-level algorithm and then execute it step by step.
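The "loop with fallbacks" idea can be sketched as a depth-first controller that proposes candidate steps, checks partial results, and backtracks out of false branches. This is a minimal sketch only: `propose` and `verify` are hypothetical stubs standing in for model calls (here hard-coded so the example runs standalone).

```python
def propose(state):
    """Stub for an LLM call that proposes candidate next steps."""
    # Pretend the model suggests appending one of these tokens.
    return ["a", "b", "c"]

def verify(state):
    """Stub for an LLM (or algorithmic) check of a partial solution."""
    # Accept only prefixes of the target string "cab".
    return "cab".startswith(state)

def solve(state="", depth=3):
    """Outer loop: try each candidate branch, fall back on failure."""
    if not verify(state):
        return None          # false branch: backtrack
    if len(state) == depth:
        return state         # complete, verified solution
    for step in propose(state):
        result = solve(state + step, depth)
        if result is not None:
            return result
    return None              # all branches failed; caller backtracks

print(solve())  # → cab
```

The key point of the comment is that this search loop lives outside the model: the LLM only fills the `propose`/`verify` roles, while the algorithm (or another model) decides when to fall back.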
refulgentis|1 year ago
Source? TFA, i.e. the thing we're commenting on, tried to, and seems to, show the opposite