I never quite understood why AI and LLMs are marketed the way they are, or why the powers behind the massive push seem so keen on selling them as a wholesale replacement for human careers, which, given the current curve of improvement, and despite what the naysayers of human intellect might suggest, is unfeasible.
Accountants didn't die off when calculators came on the scene. In no scenario is an LLM a drop-in replacement for any career field the same way CAD was a drop-in replacement for draftsmen -- and even then, draftsmen are still around today, in slightly smaller numbers, doing CAD drafting and design rather than using raw pen-and-paper skills.
Claude and Codex are exceptionally useful for reducing workload and improving productivity. But that's all they are. They're calculators replacing the slide rule, sparing you the drafting-like drudgery of typing out all your code by hand. So why not market them like that? As helpers, assistants, tools that enable you to do things better and more efficiently? In my own usage, that's all they're really good at. Instead, there's been a mad rush to shoehorn agents, LLMs, and genai into everything, outlandish claims like GPT writing better than Hemingway and Ginsberg, and absurd tools like Grok or Sora that are fundamentally broken, don't work well, and have flooded the internet with noise and disgusting slop.
And in all of this, they've created a cancerous gold rush that threatens to wipe out the entire economy when the jig is up and people realize how useless these claims are, and that at the end of the day, it's a fancy search engine, a calculator, that can think a little better and reason more than the ones of old.
It really feels like all of these CEOs are just borderline running a cult at this point.
Often I see YouTube videos that sell an overwhelmingly negative take on AI, with titles like "OpenAI fails 93% of jobs" or "AI is destroying the world" and other weirdly outlandish headlines that are clearly aimed at clickbait.
Watching this content, I often get confused, because it never seems to highlight the actual real-world progress and use that LLMs in particular are getting for coding.
Much of what was "vibe coding" is becoming just coding now. This means that, for open source, we are no longer relying on companies that create "open core" products and nerf/neglect the public version so they can sell their cloud product. We don't have to worry about a maintainer going AWOL on some Clojure or Elixir library and fret about hiring someone with "20 years of experience". We don't need to pay for a lot of expensive enterprise SaaS tools that charge six digits when we can simply use an LLM to internalize existing packages and even create our own.
Those who have been using coding agents for the past six months know how much progress there has been, and the sheer pace of it, and that we are about to turn the corner, especially as new forms of computing are in the pipeline that will scale even faster without incurring more energy, moving away from text token generation to something else that humans can't read, etc.
While it's important to watch different takes, I think someone who consumes only YouTube and the videos its algorithm is designed to push is going to be shocked and left behind, because by the time these videos are produced, things have already progressed or are in a state of change. All in all, these videos should be treated as ephemeral commentary that quickly loses its relevance given the sheer speed at which things are changing.
You make some fair points here, but I'm unclear as to whether you're claiming Geerling's video is an example of an "overwhelmingly negative take on AI".
What kind of open source projects have you been working on? Why do you think most serious open source projects reject AI PRs? Do you think you will spin up your own VLC, Linux kernel, or TensorFlow in an afternoon and that you won't need "maintainers"? What about security? Accountability? Group work?
The point of the video is to highlight how the inundation of AI-generated pull requests is harming open source. It doesn't say anything about AI success/failure rates, and it wouldn't make sense for it to go into details about that. However, it does mention that LLMs are useful for some things.
My guess is that this is going to be like every other technology that's been democratized. You see a flood of low-quality output because you have a lot of new non-technical devs. Some of it is good enough to crowd out some of the preexisting tools. The volume creates noise, which also makes the good stuff harder to find. Eventually an ecosystem starts forming around these low-hanging products, which fill the gap between pros and amateurs (think of what happened to video editing and Apple). Eventually you have more people creating a better product in the long run. There is a bit of a feedback loop here: as AI gets better, it makes the products it outputs better, which in turn can benefit AI as it learns from the improvements.
The real problem is that AI doesn't make any money. In fact, AI companies and business units hemorrhage cash. When AI is eventually priced at its true market cost, the use case for all of this collapses.
I wonder if we'll reach a breaking point with public forges, where they'll simply reject hosting a repo if it isn't from someone with a vetted background, or if the repo shows hallmarks of LLM slop (e.g., many commits over a short period of time, or other LLM tells).
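As a toy illustration of what one such "LLM tell" heuristic could look like (the window and threshold below are invented for the example, not anything a real forge uses), a sliding-window commit-rate check:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Commit:
    timestamp: datetime

def looks_like_slop(commits: list[Commit],
                    window: timedelta = timedelta(minutes=10),
                    threshold: int = 20) -> bool:
    """Flag a repo if any sliding time window of length `window`
    contains at least `threshold` commits (a burst no human typed)."""
    times = sorted(c.timestamp for c in commits)
    lo = 0
    for hi in range(len(times)):
        # Shrink the window from the left until it spans <= `window`.
        while times[hi] - times[lo] > window:
            lo += 1
        if hi - lo + 1 >= threshold:
            return True
    return False
```

In practice a forge would combine several weak signals like this rather than rely on any single one.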
GitHub recently added new repository settings to turn off pull requests or limit them to approved contributors. The announcement doesn't mention AI agents, but that's certainly relevant.
I think there'll be space for curated forges at some point but they're going to live on the margins like most self-hosted repos do.
You could address it with tech by using ideas from Radicle and Tangled, but the slop is ultimately a social problem, so you just end up with invite-only forges where the source of the invite is also held accountable (Lobsters-style).
If you want a high quality internet experience these days you have to step out of the mainstream.
I think that AI will do the vetting of repos - just as humans do that now. Perhaps AI will do a better job. The only way we're gonna fight AI slop is with AI.
Personally I agree with the alternative opinion that it will be a golden age. I'm embarking on a project that involves refactoring something I did 18 years ago. I'm assuming that it'll take 1/10 the time to make a much better modern version with the assistance of LLMs.
You overestimate my ability to keep mental context for 6 months.
Additionally, in most of the PRs I have seen reviewed, the quality hasn't really degraded or improved since LLMs started contributing. I think we have been rubber-stamping PRs for quite some time. Not sure AI is doing any worse.
The aesthetics of an argument is not the argument.
It's actually sickening that you are defending billionaires' toys, which make work for people already working for free; AIs are constructed from the illegal and unethical expropriation of labor, here and abroad.
Invoking the idea that it is classist or racist to reject yet another transparent power grab by the Epstein class against labor is maximum peasant brain.
OpenClaw Peter is using codex to analyze/de-duplicate PRs, extract good ideas from them and then re-implement them.
> I spun up 50 codex in parallel, let them analyze the PR and generate a JSON report with various signals, comparing with vision, intent (much higher signal than any of the text), risk and various other signals. Then I can ingest all reports into one session and run AI queries/de-dupe/auto-close/merge as needed on it.
Some people bitch, others are real engineers solving novel problems.
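For illustration only, the fan-out/fan-in shape of the quoted workflow can be sketched with stub analyzers standing in for real codex agents; every signal, field name, and threshold here is invented, not taken from the actual setup:

```python
import json
from concurrent.futures import ThreadPoolExecutor

def analyze_pr(pr: dict) -> str:
    # Stand-in for one agent run: a real agent would read the diff
    # and emit richer signals; this stub just scores metadata.
    report = {
        "number": pr["number"],
        "intent": pr["title"].lower().strip(),
        "risk": "high" if pr["lines_changed"] > 500 else "low",
    }
    return json.dumps(report)

def triage(prs: list[dict]) -> dict[str, list[int]]:
    # Fan out one analysis job per PR, up to 50 in parallel.
    with ThreadPoolExecutor(max_workers=50) as pool:
        reports = [json.loads(r) for r in pool.map(analyze_pr, prs)]
    # Fan in: ingest all reports and group duplicates by intent,
    # so near-identical PRs can be de-duped or auto-closed together.
    by_intent: dict[str, list[int]] = {}
    for r in reports:
        by_intent.setdefault(r["intent"], []).append(r["number"])
    return by_intent
```

The interesting design choice in the quoted setup is comparing on intent rather than diff text, which this grouping step mimics in miniature.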
I know someone who started making a game by building his own engine. Five years later, he had half an engine and zero games made with it.
Most of the people I know that are into herding AI spend most of their time doing that, but I can't say I've seen them accomplish much more than other colleagues, even the ones just using built-in AI or copy pasting code from an AI chat.
> Some people bitch, others are real engineers solving novel problems
My most disliked thing about AI so far isn't AI itself, it's how nasty AI evangelists behave when it's criticized. You don't have to attack and/or insult people, you could have just left out that last bit.
blibble|6 days ago
no labour? no demand to buy products/services
jayrot|6 days ago
If so, my suspicion is that you didn't watch it.
nomel|6 days ago
I'm always confused how this isn't ridiculously impressive: "After only 5 years, AI can succeed at 7% of jobs."
rjakrn|6 days ago
Cool advertisement bro. This is how it must have been when they marketed cigarettes to women to drive up sales.
cpeterso|6 days ago
https://github.com/orgs/community/discussions/187038
Florin_Andrei|6 days ago
It's just not clear to me who, or what, will do it.
g947o|6 days ago
Let's see how that's going to work. (It's not going well so far.)
david_allison|6 days ago
Merging a PR from a non-established contributor is often taking on responsibility for the long-term maintenance of their code.
wheelerwj|6 days ago
Author sounds like a relatively well-off white dude in the 1950s... 60s, 70s, 80s, 90s...
I get it, everything is being massively disrupted right now. I'm not trying to say AI is good or bad, but the author's argument is weak.
dist-epoch|6 days ago
https://x.com/steipete/status/2025591780595429385?s=20