preommr | 6 days ago

I am not saying this to be sarcastic - the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or Boris saying coding is solved and that 100% of his code is written by AI.

It's not good enough to just point out that Oreo CEOs say we need more Oreos.

There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.

Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between given the kinds of bets that have been made.

ozim|6 days ago

AI companies don’t have 20 years; they have at most 5 years in which to turn a profit.

They don’t have time to wait for all the companies to pick up AI tooling at their own pace.

So they lie and try to manufacture demand. Well, demand is there, but they have to manufacture FOMO so that the demand materializes now and not in 10 or 20 years.

rfv6723|6 days ago

This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.

Terr_|6 days ago

It's a "Motte and Bailey" system [0], where the extreme "AI will do everything for you" claim keeps getting thrown around to try to get investors to throw in cash, but then somehow it transmutes into "all technologies took time to mature, stop being mean to me."

To be fair, it isn't necessarily the same people doing both at once. Sometimes there are two groups under the same general banner, where one makes the big claims, and the other responds to perceived criticism of the lesser claim.

[0] https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy

flowerthoughts|5 days ago

> the problem is that people from OpenAI/Antrhopic are saying things like superintelligence in 3 years

An even bigger problem is that people keep listening to them even after they say plainly implausible things. When even Yann LeCun is throwing his hands up and saying "this approach won't work," it's pretty bad.

co_king_5|6 days ago

[deleted]

bigstrat2003|6 days ago

> I'm going to be honest, you can feel the AGI when you use newer agentic tools like OpenClaw or Claude.

You're right. I can feel how far away it is and how these tools will in no way be capable of getting us there.

arctic-true|6 days ago

Researchers looked at GPT-4 in 2023 and saw “sparks of AGI”. The saying “feel the AGI” became widespread not long after, if I’m remembering right. We’ve been saying AGI is right around the corner for a while now. And of course, if you predict the end of the world every day, you’ll eventually be right. But for the moment, what we have is an exceptionally powerful coding assistant that can also speed up entry-level work in various other white-collar industries. That is earth-shattering, paradigm-shifting. But given how competitive and expensive the AI game has become, that is not enough, so it needs to be “superintelligence” - and it’s just not.

EA-3167|6 days ago

It’s amazing that economic analysis can be dismissed by “feeling the AGI”.

You might as well be telling people to “HODL”

lanstin|6 days ago

Have you ever tried to trick an LLM? Did you have trouble?

chrysoprace|6 days ago

What does that mean? By what metric do you measure "AGI", whatever that means? Industry definitions are incredibly vague, perhaps intentionally so, with no benchmarks to define how a model, harness, or other technology might achieve "AGI". These systems have no intelligence, and can't even reason that you need to take your car to the car wash to have it washed [0].

[0] https://news.ycombinator.com/item?id=47031580

AnimalMuppet|6 days ago

> Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.

Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?

Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.