top | item 47056873


sockaddr | 12 days ago

I suspect a more general and much more clever learning algorithm will emerge by then, one that requires less training data to reach a competent problem-solving state, even with dirty data: something able to discriminate between novel information and junk. Until then, I think there will be a quality decline after a few more years.



c22 | 12 days ago

How will it emerge? In the past we've been told that the a(g)i will write itself, rapidly iterating itself into a superintelligence that handily solves all our current and future problems, but it's beginning to look like a chicken-and-egg scenario.

Living systems were able to brute-force their way to the human brain, but it took billions of years and access to parallel processes that make the entire collective history of human computation seem like a mote next to a star.

What novel spark do you see accelerating this process to such a hyperbolic extreme?

plopz | 12 days ago

I would imagine a trajectory similar to AlphaGo's: it starts out trying to replicate humans and then at a certain point pivots to entirely self-play. I think the main hurdle with LLMs is that there isn't a strong reward target to go after. The current target seems to be simply replicating humans, but to go beyond that they will need a different target.
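The self-play idea above can be sketched in miniature. This is not AlphaGo's actual method (which used MCTS plus neural networks); it's a hypothetical toy standing in for the concept: one policy plays both sides of a game with a sharp win/lose reward, no human examples needed. The game here is a simple Nim variant invented for illustration: a pile of 21 stones, each player takes 1-3 per turn, whoever takes the last stone wins (optimal play leaves the opponent a multiple of 4).

```python
import random

# Toy self-play learner: a single tabular value function plays both
# sides of take-1-to-3 Nim and learns from the win/lose reward alone.
Q = {}  # (pile, move) -> average return observed after that move
N = {}  # (pile, move) -> visit count, for incremental averaging

def legal(pile):
    return [m for m in (1, 2, 3) if m <= pile]

def pick(pile, eps):
    # Epsilon-greedy: explore with probability eps, else take the
    # move with the highest estimated value.
    if random.random() < eps:
        return random.choice(legal(pile))
    return max(legal(pile), key=lambda m: Q.get((pile, m), 0.0))

def train(episodes=50000, eps=0.2, seed=0):
    random.seed(seed)
    for _ in range(episodes):
        pile, player, history = 21, 0, []
        while pile > 0:
            m = pick(pile, eps)
            history.append((player, pile, m))
            pile -= m
            player ^= 1
        winner = history[-1][0]  # whoever took the last stone won
        # Monte Carlo update: every move by the winner is credited +1,
        # every move by the loser -1; averages converge over episodes.
        for p, s, m in history:
            r = 1.0 if p == winner else -1.0
            k = (s, m)
            N[k] = N.get(k, 0) + 1
            old = Q.get(k, 0.0)
            Q[k] = old + (r - old) / N[k]

train()
# With the greedy policy (eps=0), the learner should now leave the
# opponent a multiple of 4: e.g. from a pile of 5, take 1.
print(pick(5, 0.0), pick(6, 0.0), pick(7, 0.0))
```

The point of the sketch is the reward structure, not the algorithm: Nim gives an unambiguous win/lose signal, which is exactly what the comment argues LLM training currently lacks.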

Jean-Papoulos | 12 days ago

It took a billion years to get to the tool-making stage, and then less than a 1000th of that time to make CPUs. Then a 1000th of that time to make LLMs. We are in a parabolic extreme.