
Jackson__ | 3 months ago

From the outside, it always looked like they gave LeCun just barely enough compute for small-scale experiments. They'd publish a promising new paper, show that it works at a small scale, then never use it in any of their large-scale AI runs.

I would have loved to see a VLM utilizing JEPA, for example, but it simply never happened.

sakex | 3 months ago

I'd be surprised if they didn't scale it up.

tucnak | 3 months ago

The obvious explanation is that they did scale it up, but it turned out to be total shite, like most new architectures.