top | item 44888725

brainwipe | 6 months ago

The title is irritating, conflating AI with LLMs. LLMs are a subset of AI. I expect future systems will be mobs of expert AI agents rather than relying on LLMs to do everything. An LLM will likely be in the mix for at least the natural language processing but I wouldn't bet the farm on them alone.


DanHulton | 6 months ago

That battle was long-ago lost when the leading LLM companies and organizations insisted on referring to their products and models solely as "AI", not the more-specific "LLMs". Implementers of that technology followed suit, and that's just what it means now.

You can't blame the New Yorker for using the term in its modern, common parlance.

dasil003 | 6 months ago

Agreed, and ultimately it's fine because they're talking about products, not technology. If these products go in a completely different direction and LLMs become obsolete, the AI label will adapt just fine. Once these things hit common parlance there's no point in arguing technical specificity: 99.99% of the people using the term don't care, will never care, and language will follow their usage, not the angry pedant's.

IAmGraydon | 6 months ago

This is something I immediately noticed when ChatGPT was first released. It was instantly called "AI", but prior to that, HN would have been up in arms that it's "machine learning", not actual intelligence. For some reason, the crowd here and everywhere else just accepted the misuse of the word intelligence and let it happen. Elsewhere I can understand it, but people here know better.

Intentionally misconstruing it as actual intelligence was all a part of the grift from the beginning. They've always known there's no intelligence behind the scenes, but pushing this lie has allowed them to take in hundreds of billions in investor money. Perhaps the biggest grift the world has ever seen.

brookst | 6 months ago

Sure I can. If someone writing for the New Yorker has conflated the two concepts and is drawing bad conclusions because of it, that’s bad writing.

A good writer would tease apart this difference. That’s literally what good writing is about: giving a deeper understanding than a lay person would have.

DanielHB | 6 months ago

The computing power alone of all these GPUs would bring a revolution in simulation software. I mean zero AI/machine learning: just being able to simulate many more things than we currently can.

Most industry-specific simulation software is REALLY crap: much of it dates from the '80s and '90s and has barely evolved since. Many packages are still stuck on a single CPU core.

bee_rider | 6 months ago

It could be a nice side effect of having all this "LLM hardware" built into everything: nice little throughput-focused accelerators in everybody's computers.

I think if I were starting grad school now and wanted some easy points, I’d be looking at mixed precision numerical algorithms. Either coming up with new ones, or applying them in the sciences.

simonw | 6 months ago

If the New Yorker published a story titled "What if LLMs Don't Get Better Than This?" I expect the portion of their readers who understood what that title meant would be pretty tiny.

rusk | 6 months ago

Indeed, and the title itself contains an operational definition of AI as "this" (LLMs): if AI becomes more than "this", then the question has been answered in the affirmative.

dr_dshiv | 6 months ago

AI is what people think AI is. In the 80s, that was expert systems. In the 2000s, it was machine learning (not expert systems). Now, it is LLMs — not machine learning.

You can complain, but it's like that old man shaking his fist at the clouds.

Now, if you want to talk about cybernetics…

tim333 | 6 months ago

The title annoys me more because it doesn't mention anything about time. AI will almost certainly get a good bit better eventually. The question is whether it will in the next couple of years, or whether we'll have to wait for some breakthrough.

I'm amused they seem to refer to Marcus and Zitron as "these moderate views of A.I." They are both pretty much professional skeptics who seem to fill their days writing "AI is rubbish" articles.

svara | 6 months ago

AI is LLMs now. Similar to how machine learning became AI 5-10 years ago.

I'm not endorsing this, just stating an observation.

I do a lot of deep learning for computer vision, which became "AI" a while ago. Now, when you use the word AI in this context, it confuses people because it doesn't involve LLMs.

lokar | 6 months ago

A* search, literally textbook AI, is still doing great work.
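For anyone who hasn't seen the textbook version lokar is referring to, A* fits in a few lines. This is a minimal sketch (the graph interface, heuristic, and grid example are illustrative, not from the thread):

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Textbook A*: returns a cheapest path from start to goal, or None.
    `neighbors(n)` yields (next_node, step_cost) pairs; `heuristic(n)` must
    never overestimate the true remaining cost (admissibility)."""
    # Priority queue entries: (f = g + h, g, node, path-so-far)
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found already
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + heuristic(nxt), ng, nxt, path + [nxt]))
    return None

# Example: shortest path across an open 4x4 grid with unit-cost moves.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 4 and 0 <= ny < 4:
            yield (nx, ny), 1

def manhattan(p):
    # Admissible heuristic for 4-connected grids: distance to goal (3, 3)
    return abs(3 - p[0]) + abs(3 - p[1])

path = a_star((0, 0), (3, 3), grid_neighbors, manhattan)
```

With an admissible heuristic this finds an optimal path while expanding far fewer nodes than plain Dijkstra, which is exactly why it's still the workhorse for pathfinding.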