top | item 43380964

numba888 | 11 months ago

> where every LLM hits some arbitrary amount of context that causes it to "lose focus" and develop a sort of LLM ADD.

Human brains have the same problem, as probably any intelligence does. The solution is structural thinking: one piece at a time, often top-down. Educated humans do it, and LLMs can be orchestrated to do it too. The effective context window will remain limited even though some models claim millions of tokens.
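The "one piece at a time, top-down" orchestration the comment describes can be sketched as hierarchical (map-reduce style) summarization, where no single call ever exceeds a fixed context budget. This is a minimal illustration, not anyone's actual pipeline: `call_llm` is a hypothetical stub standing in for a real model API, and `CONTEXT_BUDGET` is an assumed per-call limit.

```python
CONTEXT_BUDGET = 200  # assumed max characters per call (stand-in for a token limit)

def call_llm(prompt: str) -> str:
    # Hypothetical stub: pretend the model compresses its input into a
    # short "summary". A real implementation would call an actual LLM API.
    return prompt[: CONTEXT_BUDGET // 4]

def chunk(text: str, size: int) -> list[str]:
    # Split the text into pieces that each fit in one call.
    return [text[i : i + size] for i in range(0, len(text), size)]

def summarize(text: str) -> str:
    # Base case: the text already fits within one call's budget.
    if len(text) <= CONTEXT_BUDGET:
        return call_llm(text)
    # Recursive case: summarize each piece, then summarize the summaries.
    pieces = chunk(text, CONTEXT_BUDGET)
    partial = " ".join(call_llm(p) for p in pieces)
    return summarize(partial)

long_doc = "lorem ipsum " * 500  # far larger than any single call allows
result = summarize(long_doc)
```

The point of the recursion is that the orchestrator, not the model, carries the structure: every individual call stays under the budget regardless of how long the input document is.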
