top | item 43914454
sweezyjeezy | 10 months ago

Only skimmed, but both seem to be referring to what transformers can do in a single forward pass; reasoning models would clearly be a way around that limitation.

o4 has no problem with the examples in the first paper (Appendix A). You can see that its reasoning here is also sound: https://chatgpt.com/share/681b468c-3e80-8002-bafe-279bbe9e18.... Not conclusive, unfortunately, since this falls within the date range of its training data. Reasoning models did kill off a large class of "easy logic errors" people discovered in earlier generations, though.

namaria | 9 months ago

Your unwillingness to engage with the limitations of the technology explains a lot of the current hype.