icoder | 7 months ago
But maybe that's simply the solution, like the solution to original neural nets was (perhaps too simply put) to wait for exponentially better/faster hardware.
crazylogger | 7 months ago
It only mattered that human brains were just big enough to enable tool use and organization. Further size ceased to matter once our brains passed that threshold. I believe LLMs are past this threshold as well (they haven't 100% matched the human brain, and maybe never will, but that doesn't matter.)
An individual LLM call might lack domain knowledge and context, and might hallucinate. The solution is not to scale up the individual LLM and hope those problems go away, but to direct your query to a team of LLMs, each playing a different role: planner, designer, coder, reviewer, customer rep, ... each working with its own unique perspective and context.
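The role-per-call idea above can be sketched in a few lines. This is a minimal illustration, not any particular framework's API: `call_llm` is a hypothetical stub standing in for a real model call, and the role prompts are invented for the example.

```python
# Sketch of routing one query through a team of role-specialized LLM calls.
# `call_llm` is a hypothetical stub; a real system would invoke a model API.

def call_llm(system_prompt: str, message: str) -> str:
    # Stub response so the sketch runs without any model backend.
    return f"[{system_prompt}] response to: {message}"

# Illustrative role prompts (assumptions, not from the original comment).
ROLES = {
    "planner": "Break the task into steps.",
    "designer": "Propose an interface and data model.",
    "coder": "Implement the plan.",
    "reviewer": "Critique the implementation for bugs and gaps.",
}

def run_team(query: str) -> dict:
    # Each role sees the query plus the accumulated transcript, so each
    # contributes its own perspective and context to the same problem.
    context, outputs = query, {}
    for role, prompt in ROLES.items():
        outputs[role] = call_llm(prompt, context)
        context = f"{context}\n{role}: {outputs[role]}"
    return outputs

results = run_team("Add CSV export to the reporting service.")
```

The key design point is that the roles run sequentially and each later role reads the earlier roles' output, so the reviewer genuinely reviews the coder's work rather than answering the raw query.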
woah | 7 months ago
You could say the exact same thing about the original GPT. Brute-forcing has gotten us pretty far.
the8472 | 7 months ago
Pointy sticks and ASML's EUV machines were designed by roughly the same lumps of compute-fat :)
billti | 7 months ago
Not sure if that's a good parallel, but seems plausible.
short_sells_poo | 7 months ago
I struggle to imagine how much further a purely text-based system can be pushed: a system that knows that 1+1=2 not because it has built an internal model of arithmetic, but because it estimates that the sequence `1+1=` is mostly followed by `2`.
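The distinction the commenter draws can be made concrete with a toy continuation-frequency model: it answers `1+1=` by counting which character most often followed that prefix in a tiny made-up corpus, with no arithmetic anywhere. The corpus here is an invented assumption for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration of "knows 1+1=2 only from continuation statistics".
# The corpus is fabricated for the example; note the deliberate noise ("1+1=3").
corpus = ["1+1=2", "1+1=2", "1+1=2", "1+1=3", "2+2=4"]

# Count which answer followed each "lhs=" prefix.
continuations = defaultdict(Counter)
for line in corpus:
    prefix, answer = line.split("=")
    continuations[prefix + "="][answer] += 1

def predict(prefix: str) -> str:
    # Return the most frequent continuation of this exact prefix.
    # There is no model of addition here, only string statistics.
    return continuations[prefix].most_common(1)[0][0]

print(predict("1+1="))  # "2", purely because it is the majority continuation
```

The failure mode is equally mechanical: ask this model about any prefix absent from its corpus (say `3+4=`) and it has nothing to fall back on, because it never built a model of the underlying operation.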