top | item 44902007


byteknight | 6 months ago

I have to disagree. Anyone who says LLMs do not qualify as AI is the same person who will keep moving the goalposts for AGI. "Well, it doesn't do this!" No one here is trying to replicate a human brain or the human condition in its entirety. They just want to replicate the thinking ability of one, and LLMs are the closest parallel to that goal we have seen thus far. Saying that LLMs are not AI feels disingenuous at best and purposely dishonest at worst (perhaps a way of staving off the perceived impending demise of a profession).

The sooner people stop worrying about which label fits LLMs best, the sooner they can find the things they (LLMs) absolutely excel at and improve their (the users') workflows.

Stop fighting the future. It's not replacing anyone right now. Later? Maybe. But right now the developers and users fully embracing it are experiencing productivity boosts unseen previously.

Language is what people use it as.


sarchertech|6 months ago

> the developers and users fully embracing it are experiencing productivity boosts unseen previously

This is the kind of thing that I disagree with. Over the last 75 years we’ve seen enormous productivity gains.

You think that LLMs are a bigger productivity boost than moving from physically rewiring computers to using punch cards, from running programs as batch processes with printed output to getting immediate output, from programming in assembly to higher level languages, or even just moving from enterprise Java to Rails?

skydhash|6 months ago

Even learning your current $EDITOR and $SHELL can be a great productivity booster. I see people claiming AI is helping them while they hunt for files in the file-manager tree instead of using `grep` or `find` (Unix).
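To make the point concrete, here's a minimal sketch of the two tools mentioned above: `find` locates files by name and `grep` searches their contents, replacing a manual crawl through a file-manager tree. The `demo/` directory and filenames are made up for illustration.

```shell
# Set up a throwaway directory with two files (hypothetical names).
mkdir -p demo/src
echo 'TODO: refactor' > demo/src/main.py
echo 'all done'       > demo/src/util.py

# Locate files by name pattern anywhere under demo/.
find demo -name '*.py'    # lists both .py files

# Recursively list files whose contents match a pattern.
grep -rl 'TODO' demo      # prints demo/src/main.py
```

Both commands compose with pipes, which is where the real productivity win over a GUI tree comes from.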

Espressosaurus|6 months ago

Or the invention of the container, or hell, the invention of the filing cabinet (back when "computer" was a job title).

overgard|6 months ago

The studies I've seen on AI actually improving productivity show gains a lot more modest than the hype would have you believe. For example: https://www.youtube.com/watch?v=tbDDYKRFjhk

Skepticism isn't the same thing as fighting the future.

I will call something AGI when it can reliably solve novel problems it hasn't been pre-trained on. That's my goalpost and I haven't moved it.

jerf|6 months ago

!= is "not equal". The symbol for "not a subset of" is ⊄, which, you will note, I did not use.

byteknight|6 months ago

I think you replied in the wrong place, bud. All the best.

EDIT: I see now. Sorry.

For all intents and purposes, to the public AI == LLM. End of story. It doesn't matter what developers say.

leptons|6 months ago

So when an LLM all-too-often produces garbage, can we then call it "Artificial Stupidity"?

byteknight|6 months ago

Not sure how that fits. Do you produce good results every time, first try? Didn't think so.

oinfoalgo|6 months ago

In cybernetics, this label has existed for a long time.

Unfortunately, discourse has followed an epistemic trajectory influenced by Hollywood and science fiction, making clear communication on the subject nearly impossible without substantial misunderstanding.

parineum|6 months ago

> Anyone who says LLMs do not qualify as AI is the same person who will keep moving the goalposts for AGI.

I have the complete opposite feeling. The layman's understanding of the term "AI" is AGI, a term that only needs to exist because researchers and businessmen hype their latest creations as AI.

The goalposts for AI don't move; the definition isn't precise, but we know it when we see it.

AI, to the layman, is Skynet/Terminator, Asimov's robots, Data, etc.

The goalpost-moving you're seeing happens when something the tech bubble calls AI escapes the bubble and everyone else looks at it and says: no, that's not AI.

The problem is that the tech industry calls everything that comes out of AI research "AI", despite it not achieving that goal by the common understanding of the term. LLMs were/are a hopeful AI candidate, but as of today they aren't one, and that doesn't stop OpenAI from raising money using the term.

shkkmo|6 months ago

AI has had many, many lay meanings over the years. Simplistic decision trees and heuristics for video games are called AI. It is a loose term, and trying to apply it with semantic rigour is useless, as is trying to tell people that it should only be used to match one of its many meanings.

If you want some semantic rigour use more specific terms like AGI, human equivalent AGI, super human AGI, exponentially self improving AGI, etc. Even those labels lack rigour, but at least they are less ambiguous.

LLMs are pretty clearly AI and AGI under commonly understood, lay definitions. LLMs are not human level AGI and perhaps will never be by themselves.

byteknight|6 months ago

"Just ask AI" is a phrase you will hear around enterprises now. You less often hear "Google it". You hear "ChatGPT it".

imiric|6 months ago

> The sooner people stop worrying about which label fits LLMs best, the sooner they can find the things they (LLMs) absolutely excel at and improve their (the users') workflows.

This is not a fault of the users. These labels are pushed primarily by "AI" companies in order to hype their products to be far more capable than they are, which in turn increases their financial valuation. Starting with "AI" itself, "superintelligence", "reasoning", "chain of thought", "mixture of experts", and a bunch of other labels that anthropomorphize and aggrandize their products. This is a grifting tactic old as time itself.

From Sam Altman[1]:

> We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence

Apologists will say "they're just words that best describe these products" or repeat Dijkstra's "submarines don't swim" quote, but all of this misses the point. These words are used deliberately because of their association with human concepts, when in reality the way the products work is not even close to what those words mean. In fact, the fuzzier the word's definition ("intelligence", "reasoning", "thought"), the more valuable it is, since it makes the product sound mysterious and magical, and makes it easier to shake off critics. This is an absolutely insidious marketing tactic.

The sooner companies start promoting their products honestly, the sooner their products will actually benefit humanity. Until then, we'll keep drowning in disinformation, and reaping the consequences of an unregulated marketplace of grifters.

[1]: https://blog.samaltman.com/the-gentle-singularity