But we don't even need a human brain. We already have those: they take months to grow, take forever to train, and are forever distracted. Our logic-based processes will keep getting smaller and less power-hungry as we figure out how to implement them at ever smaller scales, and eventually we'll be able to solve problems with the same building blocks as evolution, but in intelligent ways, with LLMs likely playing only a minuscule part of the larger algorithms.
andai|3 months ago
They're not that great at knowledge (and we're currently wasting most of the neurons on memorizing Common Crawl, which... have you looked at Common Crawl?)
They're not that great at determinism (a good solution here is to have the LLM write 10 lines of Python, which then feed back into the LLM; the task then completes 100% of the time, and much more cheaply too).
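A minimal sketch of that pattern: instead of asking the model to *perform* a task on every request, ask it once to *write* code for the task, then run that code deterministically. `ask_llm`, `normalize_phone`, and the prompt are all invented for illustration; a real implementation would call an actual model API.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model call. Here we hard-code
    # the kind of small function the model might return.
    return (
        "def normalize_phone(raw):\n"
        "    digits = ''.join(ch for ch in raw if ch.isdigit())\n"
        "    return digits[-10:]\n"
    )

# One expensive, non-deterministic call to generate the code...
code = ask_llm("Write a Python function normalize_phone(raw) that "
               "keeps the last 10 digits of a phone number string.")
namespace = {}
exec(code, namespace)
normalize_phone = namespace["normalize_phone"]

# ...then every subsequent call is cheap and gives the same answer.
print(normalize_phone("(555) 123-4567"))  # -> 5551234567
```

The generated function never touches the model again, so repeated runs are byte-for-byte identical and cost nothing per call.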
They're not that great at complex rules (surprisingly good, actually, but expensive and flaky). Often we are trying to simulate what are basically 50 lines of Prolog with a trillion params and 50KB of vague English prompts.
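To make that concrete, here's a sketch (in Python rather than Prolog, to match the thread) of what "50 lines of rules" looks like as explicit code instead of a prose prompt. The rules themselves are invented for illustration:

```python
# Each rule is a name plus a predicate over an order dict. This is the
# whole "policy" that would otherwise live in a vague English prompt.
RULES = [
    ("underage",        lambda o: o["age"] < 18),
    ("over_limit",      lambda o: o["amount"] > 10_000),
    ("blocked_country", lambda o: o["country"] in {"XX", "YY"}),
]

def check(order: dict) -> list[str]:
    """Return the name of every rule the order violates."""
    return [name for name, pred in RULES if pred(order)]

print(check({"age": 17, "amount": 12_000, "country": "US"}))
# -> ['underage', 'over_limit']
```

Evaluating these rules is exact, auditable, and essentially free, which is the contrast being drawn with running them through a trillion-parameter model.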
I think if we figure out what we're actually trying to do with these things, then we can do each of those things properly, and the whole system will work a lot better.
kryogen1c|3 months ago
This is a weird argument considering LLMs are composed of the output of countless hours of human brains. That makes LLMs, by definition, logarithmically worse at learning.