lol posted the same thing above. I’m glad I’m not the only one who thought that was an extremely powerful read! As someone currently trying to imbue silicon with an eternal soul, it’s very sobering.
AGI feels like a marketing term to me now. I mainly see it as research into how we can improve and scale our current model architectures so that each generation is better than the last.
Eh, regardless of all that, the answer is just “yes” imo: self-improvement is a necessary step toward meaningful superintelligence. Perhaps not for AGI, but it seems like a massive help, if not strictly necessary.
In a way, that’s what modern multi-stage-trained foundation models already do: improve their own weights intelligently. Getting the same results in human-readable code (which this is a first step towards) would be a lot more powerful…
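A toy sketch of the distinction in Python (the quadratic loss and update rule below are made-up stand-ins, not any actual model’s training code): weight-space self-improvement is just an optimizer stepping on numbers, while the code-space version (what this is a first step towards) would rewrite the update rule itself.

    # Weight-space "self-improvement": an optimizer nudging parameters.
    # Assumption: a 1-D quadratic loss stands in for a real model.
    def loss(w):
        return (w - 3.0) ** 2  # minimum at w = 3.0

    def grad(w):
        return 2.0 * (w - 3.0)

    w, lr = 0.0, 0.1
    for _ in range(100):
        w -= lr * grad(w)  # each step "improves the weights"

    print(f"w = {w:.4f}, loss = {loss(w):.8f}")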
mitthrowaway2|2 years ago
https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality...
clbrmbr|2 years ago
> It is doing this for no greater reason than that an optimiser was brought into reach, and this is what optimisers do.
> All the agent piece has to do is pump the optimality machine.
It’s like putting the mic too close to the amplifier.
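The analogy is pretty literal: feed the output back into the input with gain above 1 and the signal grows on every pass until something physical clips it. A toy version in Python (the numbers are made up, just to show the runaway shape):

    # Positive feedback: output loops back to input with gain > 1.
    # Assumption: a hard ceiling stands in for the amp clipping.
    signal, gain, ceiling = 0.01, 1.5, 1.0
    for step in range(15):
        signal = min(signal * gain, ceiling)  # one feedback pass
        print(f"pass {step:2d}: level = {signal:.4f}")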