
nadam | 1 year ago

I love this, it's super interesting, but my intuition from looking at a dozen examples is that the problem is hard, yet easy enough that if it becomes popular, near-human-level results will appear in a year or less, and AGI will not be reached. The problem seems to be finding a sufficiently generic transformation-description language with the appropriate operators, and then heuristics to find a very short program (in the information-theoretic sense) in that language that reproduces all the examples for a task. I would be very surprised if the 34% result were not improved significantly soon, and I would be surprised if that progress transferred to general intelligence, at least when I think of the areas where I use AI today and where it still falls short. Basically, my intuition is that this will be yet another 'Chess'- or 'Go'-like problem in AI. Still an absolutely worthwhile research topic: the value that could come out of it is well worth the $1M.
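To make that concrete, here is a minimal sketch of the approach the comment describes: enumerate programs over a small DSL of grid operators in order of length (a crude stand-in for information-theoretic description length) and return the first program consistent with every example. The operator set and names here are hypothetical illustrations, not taken from any actual ARC solver:

    import itertools

    # Hypothetical, minimal DSL of grid operators (names are illustrative):
    # each op maps a grid (list of lists) to a grid.
    def rot90(g):    return [list(r) for r in zip(*g[::-1])]
    def flip_h(g):   return [r[::-1] for r in g]
    def flip_v(g):   return g[::-1]
    def identity(g): return g

    OPS = {"rot90": rot90, "flip_h": flip_h, "flip_v": flip_v, "id": identity}

    def run(program, grid):
        """Apply a sequence of op names to a grid."""
        for name in program:
            grid = OPS[name](grid)
        return grid

    def shortest_program(examples, max_len=4):
        """Enumerate programs in order of length (shorter program = fewer
        bits, a crude MDL proxy) and return the first one that reproduces
        every (input, output) example."""
        for length in range(1, max_len + 1):
            for program in itertools.product(OPS, repeat=length):
                if all(run(program, x) == y for x, y in examples):
                    return program
        return None

    # Usage: both examples are consistent with a single horizontal flip.
    examples = [
        ([[1, 2], [3, 4]], [[2, 1], [4, 3]]),
        ([[5, 0], [0, 5]], [[0, 5], [5, 0]]),
    ]
    print(shortest_program(examples))  # ('flip_h',)

Real ARC tasks would need a far richer operator set (color remapping, object segmentation, symmetry completion, etc.) and smarter search than brute-force enumeration, but the structure of the approach is the same.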

zug_zug | 1 year ago

I have the exact same impression.

IMO there's no evidence whatsoever that nailing this task implies true AGI (e.g., being able to write novel math proofs, ask insightful questions nobody has thought of before, self-direct its own learning, read its own source code).

apendleton | 1 year ago

I'm not sure the goal of this competition, in and of itself, is AGI. They point to current LLMs emerging from transformers, which in turn emerged from a general basket of building blocks developed in machine-translation research (attention, etc.). The suggestion seems to be that some fundamental building blocks are missing between where we are now and AGI, and this is an attempt to spur their development. By analogy with LLMs, the goal here is to come up with a new thing like "attention," not a new thing like GPT-4.
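For readers who haven't seen it, the "attention" being referenced is the scaled dot-product operation from the machine-translation literature; a minimal NumPy sketch (shapes and values are illustrative only):

    import numpy as np

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)  # similarity of each query to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V  # weighted mix of the values

    # Toy usage: 3 query positions attending over 4 key/value positions, dim 8.
    rng = np.random.default_rng(0)
    Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
    print(attention(Q, K, V).shape)  # (3, 8)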