deathanatos|18 days ago
Nobody. Nobody disagrees, there is zero disagreement, there is no war in Ba Sing Se.
> 100% of today’s SWE tasks are done by the models.
Thank God, maybe I can go lie in the sun then instead of having to solve everyone's problems with ancient tech that I wonder why humanity is even still using.
Oh, no? I'm still untying corporate Gordian knots?
> There is no reason why a developer at a large enterprise should not be adopting Claude Code as quickly as an individual developer or developer at a startup.
My company tried this, then quickly stopped: $$$
dang|17 days ago
You may not owe AGI enthusiasts better, but you owe this community better if you're participating in it.
deathanatos|17 days ago
These posts are so tiring. The statement is an outright, blatant lie, because it's grift. The grifter wants to silence dissent by rendering it "non-existent", so that the grift can take the position of a foregone conclusion: there is no dissent. The statement is outrageous given the obvious amount of dissent in these very comments, and the positive reaction of my fellow commenters to that dissent.

"AI built a browser from scratch." It did not. "AI built a compiler." It can't compile hello world. "AGI is coming & nobody disagrees." But the truth is still getting its shoes on while the lie has already spread around the world.
It's doubly tiring since I (and, I suspect, many of us) are having AI stuffed down our gullets by our respective management chains. Any honest evaluation of AI comes to the result that it's nowhere near capable, routinely misses the mark, and that verifying its answers probably takes more time than using it saves. But I suspect many people are just skipping the verification step.
& it's disappointing to see low-quality articles like this make it, time and again; it feels like thoughtful discussion no longer moves minds these days.
I'll try to express this without the snark going forward, though.
stego-tech|18 days ago
This captures my chief irk over these sorts of "interviews" and AI boosterism quite nicely.
Assume they're being 100% honest that they genuinely believe nobody disagrees with their statement. That leaves one of two possible outcomes:
1) They have not ingested data from beyond their narrow echo chamber that could challenge their perceptions, revealing an irresponsible, nay, negligent amount of ignorance for people in positions of authority or power
OR
2) They do not see their opponents as people.
Like, that's it. They're either ignorant or they view their opposition as subhuman. There is no gray area here, and it's why I get riled up when they're allowed to speak unchallenged at length like this. Genuinely good ideas don't need this much defense, and genuinely useful technologies don't need to be forced down throats.
semiinfinitely|18 days ago
this third option seems like the most reasonable option here? the way you worded this makes it seem like there are only those two options, just to reach your absurd conclusion
> like thats it
> There is no gray area here
re-examine your assumptions
palmotea|18 days ago
> Like, that's it. They're either ignorant or they view their opposition as subhuman.
I'm going to go a bit off topic, but tech people often just inhale sci-fi, and I think we ought to reckon with the problems of that, especially when tech people get into positions of power.
Take Dune, for instance. Everyone knows Vladimir Harkonnen is a bad guy, but even the good-guy Atreides seem to spend their time fighting and assassinating, Paul's jihad kills 60 billion people, and Leto II is a totalitarian tyrant. It's all elite power-and-dominance shit; not even the protagonists are good people when you think about it. Regular people merit barely a mention, and are just fodder.
Often the people are cardboard, and it's the (fantasy) tech and the "world building" that are the focus.
It doesn't seem like it'd be good influence on someone's worldview, especially when not balanced sufficiently by other influences.
co_king_3|18 days ago
You hit the nail on the head.
They go out of their way to call you an "AI bot" if you say something that contradicts their delusional world view.
piva00|18 days ago
How much were devs spending for cost to become a sticking point?
I'm asking because I thought it'd be extremely expensive when it rolled out at the company I work for. We have dashboards tracking expenses averaged per dev in each org layer; the most expensive usage is about US$350/month/dev, and the average hovers around US$30-50.
It's much cheaper than I expected.
ryandvm|18 days ago
The rest of us believe that the human brain is pretty much just a meat computer that differs from lower life forms mostly quantitatively. If that's the case, then there really isn't much reason to believe we can't do exactly what nature did and just keep scaling shit up until it's smart.
reducesuffering|18 days ago
'By powerful AI [he dislikes the baggage of AGI, but means the same], I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter”.'
https://darioamodei.com/essay/machines-of-loving-grace
co_king_3|18 days ago
It's a constantly shifting goalpost. Really, it's just a big lie that says AI will do whatever you can imagine it doing.
anematode|18 days ago
Meanwhile, Claude Code is implemented using a React-like framework and has 6000 open issues, many of which are utterly trivial to fix.
coffeefirst|18 days ago
Can I ask what happened with your Claude Code rollout?