top | item 46496673

demirbey05 | 1 month ago

It's hard to cut through the AI hype when there are billions of dollars at stake. I usually trust negative comments more, as long as the person isn't trying to sell a course. Even though Terence Tao is a respected scientist, I wonder if his recent comments are driven by a need for funding due to federal cuts. I’ve had similar experiences with LLMs—whenever I ask them about hard math or RL theory, they almost always give me the wrong answers.

ben_w | 1 month ago

I also care more about the failure modes than the successes, although in my case, it's because I keep finding them exceptionally useful at software development, and I:

1. Don't want to use them where they suck.

Think normalisation of deviance: "the problems haven't affected me, therefore they don't exist" is a way to get really badly burned.

2. Want to train up in things they will still suck at by the time I've learned whatever it is.

I find LLMs seem kinda bad at writing sheet music, and Suno is kinda bad at weird instructions (like Stable Diffusion for images), but I expect them to get good before I can.

I also find them inconsistent at non-Euclidean problems: sometimes they can, sometimes they can't. I have absolutely no idea how to monetise that, but even if I could, "inconsistent" is itself an improvement on "cannot ever", which is what SOTA was a few years ago.