sn0wr8ven|1 year ago

Incredibly impressive. Still can't really shake the feeling that this is o3 gaming the system more than actually reasoning. If the reasoning capabilities are there, there should be no reason why it achieves 90% on one version and 30% on the next. If a human maintains the same performance across the two versions, an AI that can reason should too.

kmacdough|1 year ago

The point of ARC is NOT to compare humans vs AI, but to probe the current boundary of AI's weaknesses. AI has been beating us at specific tasks like handwriting recognition for decades. Rather, it's when we can no longer readily find these "easy for humans, hard for AI" reasoning tasks that we must stop and consider.

If you look at the ARC tasks failed by o3, they're really not well suited to humans. They lack the living context humans thrive on, and have relatively simple, analytical outcomes that are readily processed by simple structures. We're unlikely to see AI as "smart" until it can be asked to accomplish useful units of productive professional work at a "seasoned apprentice" level. Right now they're consuming ungodly amounts of power just to pass some irritating, sterile SAT questions. Train a human for a few hours a day over a couple weeks and they'll ace this no problem.

tintor|1 year ago

o3 low and high are the same model. The difference is in how long it was allowed to think.

It works the same with humans. If they spend more time on the puzzle they are more likely to solve it.

cornholio|1 year ago

But does it matter if it "really, really" reasons in the human sense, if it's able to prove some famous math theorem or come up with a novel result in theoretical physics?

While beyond current models, that would be the final test of AGI capability.

jprete|1 year ago

If it's gaming the system, then it's much less likely to reliably come up with novel proofs or useful new theoretical ideas.

xanderlewis|1 year ago

That would be important, but as far as I know it hasn’t happened (despite how often it’s intimated that we’re on the verge of it happening).

intended|1 year ago

Yeah, it really does matter whether something was reasoned, or whether it just appears when you metaphorically shake the magic 8-ball.

FartyMcFarter|1 year ago

How would gaming the system work here? Is there some flaw in the way the tasks are generated?

jprete|1 year ago

AI models have historically found lots of ways to game systems. My favorite example is exploiting bugs in simulator physics to "cheat" at games of computer tag. Another is a model for radiology tasks finding biases in diagnostic results using dates on the images. And of course whenever people discuss a benchmark publicly it leaks the benchmark into the training set, so the benchmark becomes a worse measure.

demirbey05|1 year ago

I am not an expert in LLM reasoning, but I think it's because of RL. You cannot use AlphaZero to play other games.

ozten|1 year ago

Nope. AlphaZero taught itself to play games like chess, shogi, and Go through self-play, starting from random moves. It was not given any strategies or human gameplay data but was provided with the basic rules of each game to guide its learning process.

sgt101|1 year ago

I thought that AlphaZero could play three games? Go, Chess and Shogi?

GaggiX|1 year ago

Humans and AIs are different; the next benchmark would be built so that it emphasizes the weak points of current AI models, where a human is expected to perform better. But I guess you can also make a benchmark that is the opposite, where humans struggle and o3 has an easy time.

vectorhacker|1 year ago

I think you've hit the nail on the head there. If these systems of reasoning are truly general, then they should be able to perform consistently in the same way a human does across similar tasks, barring some variance.

pkphilip|1 year ago

Yes, if a system has actually achieved AGI, it is likely not to reveal that information.

dlubarov|1 year ago

AGI wouldn't necessarily entail any autonomy or goals though. In principle there could be a superintelligent AI that's completely indifferent to such outcomes, with no particular goals beyond correctly answering questions or whatnot.

pkphilip|1 year ago

Not sure why I am being downvoted. Why would a sufficiently advanced intelligence reveal its full capabilities, knowing full well that it would then be subjected to a range of constraints and restraints?

If you disagree with me, state why instead of opting to downvote me