
deathanatos | 18 days ago

> Nobody at this point disagrees we’re going to achieve AGI this century.

Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.

> 100% of today’s SWE tasks are done by the models.

Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.

Oh, no? I'm still untying corporate Gordian knots?

> There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.

My company tried this, then quickly stopped: $$$


dang|17 days ago

Can you please make your substantive points without snark? We're trying for a quite different kind of discussion here. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

You may not owe AGI enthusiasts better, but you owe this community better if you're participating in it.

deathanatos|17 days ago

(Since I cannot edit the post.)

These posts are so tiring. The statement is an outright, blatant lie, because it's grift. The grifter wants to silence dissent by rendering it "non-existent", so that the grift can be treated as a foregone conclusion: there is no dissent. The claim is outrageous given the obvious amount of dissent in these very comments, and the positive reception that dissent gets from my fellow commenters. "AI built a browser from scratch." It did not. "AI built a compiler." It can't compile hello world. "AGI is coming & nobody disagrees." But the truth is still getting its shoes on while the lie has already spread across the world.

It's doubly tiring since I (and, I suspect, many of us) are having AI stuffed down our gullets by our respective management chains. Any honest evaluation of AI comes to the conclusion that it's nowhere near capable, routinely misses the mark, and often takes more time to verify its answer than it saves. But I suspect many people are just skipping the verification step.

And it's disappointing to see low-quality articles like this make it, time and again; it feels like thoughtful discussion no longer moves minds these days.

I'll try to express this without the snark going forward, though.

stego-tech|18 days ago

> Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.

This captures my chief irk over these sorts of "interviews" and AI boosterism quite nicely.

Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement. That leaves one of two possible outcomes:

1) They have not ingested data from beyond their narrow echo chamber that could challenge their perceptions, revealing an irresponsible, nay, negligent amount of ignorance for people in positions of authority or power

OR

2) They do not see their opponents as people.

Like, that's it. They're either ignorant or they view their opposition as subhuman. There is no gray area here, and it's why I get riled up when they're allowed to speak unchallenged at length like this. Genuinely good ideas don't need this much defense, and genuinely useful technologies don't need to be forced down throats.

semiinfinitely|18 days ago

option 3: reject the premise that they're being 100% honest

this third option seems like the most reasonable one here? the way you worded this makes it seem like there are only those two options, which is how you reach your absurd conclusion

> Like, that's it.

> There is no gray area here

re-examine your assumptions

SpicyLemonZest|17 days ago

I think you're being uncharitable towards option 2. When a physicist says "nobody disagrees that perpetual motion machines are impossible", are they saying that Jimbo who thinks he's built one in his garage is subhuman? Of course not. What they mean is that all experts who've seriously considered the issue agree, and they see so little substance in non-expert objections that it's not worth engaging.

palmotea|18 days ago

> 2) They do not see their opponents as people.

> Like, that's it. They're either ignorant or they view their opposition as subhuman.

I'm going to go a bit off topic, but tech people often just inhale sci-fi, and I think we ought to reckon with the problems of that, especially when tech people get into positions of power.

Take Dune, for instance. Everyone knows Vladimir Harkonnen is a bad guy, but even the good-guy Atreides spend their time fighting and assassinating, Paul's jihad kills 60 billion people, and Leto II is a totalitarian tyrant. It's all elite power-and-dominance shit; not even the protagonists are good people when you think about it. Regular people merit barely a mention and are just fodder.

Often the people are cardboard, and it's the (fantasy) tech and the "world building" that are the focus.

It doesn't seem like it'd be good influence on someone's worldview, especially when not balanced sufficiently by other influences.

co_king_3|18 days ago

> They do not see their opponents as people.

You hit the nail on the head.

They go out of their way to call you an "AI bot" if you say something that contradicts their delusional world view.

piva00|18 days ago

> My company tried this, then quickly stopped: $$$

How much were devs spending for it to become a sticking point?

I'm asking because I thought it'd be extremely expensive when it rolled out at the company I work for. We have dashboards tracking expenses averaged per dev in each org layer; the most expensive usage is about US$350/month/dev, and the average hovers around US$30-50.

It's much cheaper than I expected.
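A dashboard like that is essentially a per-dev aggregation over usage records. A minimal sketch of the computation, assuming hypothetical record fields and made-up figures (the real dashboard's data model isn't described in the thread):

```python
from collections import defaultdict

# Hypothetical usage records: (dev, org layer, monthly spend in USD)
records = [
    ("alice", "platform", 42.0),
    ("bob", "platform", 350.0),
    ("carol", "web", 18.5),
    ("dave", "web", 61.5),
]

def spend_by_org(rows):
    """Average and max monthly spend per dev, grouped by org layer."""
    per_org = defaultdict(list)
    for dev, org, usd in rows:
        per_org[org].append(usd)
    return {
        org: {"avg": sum(v) / len(v), "max": max(v)}
        for org, v in per_org.items()
    }

stats = spend_by_org(records)
print(stats)  # e.g. platform averages (42 + 350) / 2 = 196.0
```

The point of averaging per org layer rather than company-wide is that one heavy user (the $350/month dev above) is visible as a max without distorting the overall picture.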

Tenoke|18 days ago

Nobody among people remotely worth listening to, anyway. There are always people deeply wrong about things, but betting on more than 70 years at this point is a pretty insane position unless you have a great reason, like expecting Taiwan to get bombed tomorrow and progress to slow down.

cosmic_cheese|18 days ago

Probabilities have increased, but it's still not a certainty. It may turn out that stumbling across LLMs as a mimicry of human intelligence was a fluke, and that the confluence of remaining discoveries and advancements required to produce real AGI won't fall into place for many, many years to come, especially if some major event (catastrophic world war, systemic environmental collapse, etc.) brings the engine of technological progress to a crawl for 3-5 decades.

ryandvm|18 days ago

I think the only people that don't think we're going to see AGI within the next 70 years are people that believe consciousness involves "magic". That is, some sort of mystical or quantum component that is, by definition, out of our reach.

The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.

roxolotl|18 days ago

While I fully agree with your sentiment, it's striking that Dario said "this century". Assuming he lives to 80, he likely won't even be alive for about half of his prediction window. It's such a remarkably meaningless comment.

viking123|18 days ago

He was hawking the doubling of human lifespan to some boomers a few months ago. Current AI is just religion in new clothes, mainly for people who see themselves as too smart to believe in God and heaven, so they believe in AI and project everything onto it.

yolo3000|18 days ago

> We pay humans upwards of $50 trillion in wages because they’re useful, even though in principle it would be much easier to integrate AIs into the economy than it is to hire humans

gas9S9zw3P9c|18 days ago

Can someone explain to me what AGI means? What is the concrete technical definition? How do we know it is achieved?

wrs|18 days ago

Microsoft and OpenAI had to define it in their agreement, and settled on “AI systems that can generate at least $100 billion in profits”. Which tells you where those folks are coming from.

reducesuffering|18 days ago

Dario defines it as:

'By powerful AI [he dislikes the baggage of AGI, but means the same], I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:

In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.

In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.

It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.

It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.

The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.

Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.

We could summarize this as a “country of geniuses in a datacenter”.'

https://darioamodei.com/essay/machines-of-loving-grace

z2|18 days ago

I think we still have trouble defining the 'I' part of AGI, and the rest is predicated on that definition being objective and concrete first.

co_king_3|18 days ago

It's Nirvana or Heaven for the AI cult.

It's a constantly shifting goalpost. Really, it's just a big lie that says AI will do whatever you can imagine it doing.

anematode|18 days ago

> 100% of today’s SWE tasks are done by the models.

Meanwhile, Claude Code is implemented using a React-like framework and has 6000 open issues, many of which are utterly trivial to fix.

ponector|18 days ago

He does not specify whether tasks are done correctly. Merge your change request full of AI slop and close the ticket in Jira as done. Voila! Velocity increased to the moon! 6000 or 7000 open issues - who cares?

coffeefirst|18 days ago

I’m honestly trying to understand the state of the art and unfortunately the industry is so grifty it’s hard to tell…

Can I ask what happened with your Claude Code rollout?