systemf_omega | 1 month ago
The way I see it, I can just start using AI once they get good enough for my type of work. Until then I'm continuing to learn instead of letting my brain atrophy.
simonw|1 month ago
I don't think that's true.
I'm really good at getting great results out of coding agents and LLMs. I've also been using LLMs for code on an almost daily basis since ChatGPT's release on November 30th 2022. That's more than three years ago now.
Meanwhile I see a constant flow of complaints from other developers who can't get anything useful out of these machines, or find that the gains they get are minimal at best.
Using this stuff well is a deep topic. These things can be applied in so many different ways, and to so many different projects. The best asset you can develop is an intuition for what works and what doesn't, and getting that intuition requires months if not years of personal experimentation.
I don't think you can just catch up in a few weeks, and I do think that the risk of falling behind isn't being taken seriously enough by much of the developer population.
I'm glad to see people like antirez ringing the alarm bell about this - it's not going to be a popular position but it needs to be said!
tymscar|1 month ago
Also, Simon, with all due respect, and I mean it, I genuinely look in awe at the amount of posts you have on your blog and your dedication, but it’s clear to anyone that the projects you created and launched before 2022 far exceed anything you’ve done since. And I will be the first to say that I don’t think that’s because of LLMs not being able to help you. But I do think it’s because what makes you really, really good at engineering you kept replacing slowly but surely with LLMs more and more by the month.
If I look at Django, I can clearly see your intelligence, passion, and expertise there. Do you feel that any of the projects you’ve written since LLMs are the main thing you focus on are similar?
Think about it this way: 100% of you wins against 100% of me any day. 100% of Claude running on your computer is the same as 100% of Claude running on mine. 95% of Claude and 5% of you, while still better than me (and your average Joe), is nowhere near the same jump from 95% Claude and 5% me.
I do worry when I see great programmers like you diluting their work.
Humorist2290|1 month ago
There's a learning curve to any toolset, and it may be that using coding agents effectively is more than a few weeks of upskilling. It may be, and likely will be, that people make their whole careers about being experts on this topic.
But it's still a statistical text prediction model, wrapped in fancy gimmicks, sold at a loss by mostly bad faith actors, and very far from its final form. People waiting to get on the bandwagon could well be waiting to pick up the pieces once it collapses.
systemf_omega|1 month ago
Just like the stuff LLMs are being used for today. Why wouldn't "using LLMs well" be just one more of the many things LLMs simplify?
Or do you believe your type of knowledge is somehow special and is resistant to being vastly simplified or even made obsolete by AI?
csmpltn|1 month ago
coffeemug|1 month ago
lunar_mycroft|1 month ago
Maybe AI gets good enough at writing code that it's users' knowledge of computer science and software development becomes irrelevant. In that case, approximately everyone on this site is just screwed. We're all in the business of selling that specialized knowledge, and if it's no longer required then companies aren't going to pay us to operate the AI, they're going to pay PMs, middle managers, executives, etc. But even that won't be particularly workable long term, because all their customers will realize they no longer need to pay the companies for software either. In this world, the price of software goes to zero (and hosting likely gets significantly more commoditized than it is now). Any time you put into learning to use LLMs for software development doesn't help you keep making money selling software, and actually stops you from picking up a new career.
If, on the other hand, CS and software engineering knowledge is still needed, companies will have to keep/restart hiring or training new developers. In terms of experience using AI, it is impossible for anyone to have less experience than these new developers. We will, however, have much more experience and knowledge of the aforementioned non-LLM skills that we're assuming (in this scenario) are still necessary for the job. In this scenario you might be better off if you'd started learning to prompt a bit earlier, but you'll still be fine if you didn't.
elvis10ten|1 month ago
This has been my experience. When something gets good enough, someone will create some really good resource on it. Allowing the dust to settle is, to me, a more efficient strategy than constantly trying to "keep up". Though maybe one shouldn't wait too long either.
This wouldn’t work of course if a person was trying to be some AI thought leader.
grayhatter|1 month ago
What's the impressive thing that can convince me it's equivalent, or better than anything created before, or without it?
I understand you've produced a lot of things, and that your clout (which depends on the AI fervor) is largely based on how refined a workflow you've invented. But I want to see the product, rather than the hype.
Make me say: I wish I was good enough to create this!
Without that, all I can see is the cost, or the negative impact.
edit: I've read some of your other posts, and for my question, I'd like to encourage you to pick only one. Don't use the scattershot approach that LLMs love, giving plenty of examples and hoping I'll ignore the noise for the single one that sounds interesting.
Pick only one. What project have you created that you're truly proud of?
I'll go first, (even though it's unfinished): Verse
Mawr|1 month ago
stefanlindbohm|1 month ago
From where I’m standing, I don’t see any massive difference in overall productivity between those all in on vibe coding and those who aren’t. There aren’t more features, higher quality, etc. from teams/companies out there than before, on any high-level metrics/observations. Maybe it will come, but there’s also no evidence it will.
I do, however, see great gains within certain specific tasks using LLMs. Smaller-scope code gen, rubber ducking, etc. But this seems much less difficult to get good at (and I hope for tooling that helps facilitate those specific use cases), and on the whole it amounts to marginal gains. It seems fine to be a few years late to catch up, worst case.
camel-cdr|1 month ago
rubslopes|1 month ago
jeroenhd|1 month ago
The intuition just doesn't hold. The LLM gets trained and retrained by other LLM users so what works for me suddenly changes when the LLM models refresh.
LLMs have only gotten easier to learn and catch up on over the years. In fact, most LLM companies seem to optimise for getting started quickly over getting good results consistently. There may come a moment when the foundations solidify and not bothering with LLMs may put you behind the curve, but we're not there yet, and with the literally impossible funding and resources OpenAI is claiming they need, it may never come.
mmcnl|1 month ago
furyofantares|1 month ago
It is just way easier for someone to get up to speed today than it was a year ago. Partly because capabilities have gotten better and much of what was learned 6+ months ago no longer needs to be learned. But also partly because there is just much more information out there about how to get good results, you might have coworkers or friends you can talk to who have gotten good results, you can read comments on HN or blog posts from people who have gotten good results, etc.
I mean, ok, I don't think someone can fully catch up in a few weeks. I'll grant that for sure. But I think they can get up to speed much faster than they could have a year ago.
Of course, they will have to put in the effort at that time. And people who have been putting it off may be less likely to ever do that. So I think people will get left behind. But I think the alarm to raise is more, "hey, it's a deep topic and you're going to have to put in the effort" rather than "you better start now or else it's gonna be too late".
matsemann|1 month ago
Or would you say people shouldn't learn Django now? As it's useless as they're already far behind? They shouldn't study computer science, as it will be too late?
Every profession has new people continuously entering the workforce, who quickly get up to speed on whatever is in vogue.
Honestly, what you've spent years learning and experimenting with, someone else will be able to learn in months. People will figure out the best ways of using these tools after lots of attempts, and that distilled knowledge will be transferred quickly to others. This is surely painful to hear for those having spent years in the trenches, and is perhaps why you refuse to acknowledge it, but I think it's true.
febusravenga|1 month ago
You're basically saying that using LLMs is like using magic. Telling people to use intuition amounts to saying: I don't know how or why it works, but it works for me sometimes.
That's why we programmers hate it: we had a safe space where no intuition was needed, namely programming languages and runtimes with deterministic behavior. And now we're shoehorned back into a mess of magic, intuition, and wishful thinking.
(Yes, I try LLMs, I have some results; I'm mostly frustrated by people AI-slopping _everything_ around me.)
unknown|1 month ago
[deleted]
hollowturtle|1 month ago
nitwit005|1 month ago
It might be now, but the intent of these tools is clearly not to make you learn a bunch of workarounds to get the tool to do what you want.
If these tools do improve, that inefficiency would presumably reduce, or go away entirely, which means you wouldn't see an advantage to your head start.
biophysboy|1 month ago
tehnub|1 month ago
noosphr|1 month ago
The pro-AI people don't understand what quadratic attention means, and the anti-AI people don't understand how much information can be contained in a TB of weights.
At the end of the day both will be hugely disappointed.
>The best asset you can develop is an intuition for what works and what doesn't, and getting that intuition requires months if not years of personal experimentation.
Intuition does not translate between models. Whatever you thought dense LLMs were good at, DeepSeek completely upended in an afternoon. The difference between major revisions of model families is substantial enough that intuition is a drawback, not an asset.
water-drummer|1 month ago
You feel that way because it took you years or months to reach that point. But after reaching that point, do you really think that it's equally—if not more—difficult to put what you learned into words compared to, let's say, programming or engineering?
See, the thing about these tools is that they're designed to be operated via natural language, something at which most people (with a certain level of education) are quite comparable; consequently, the skill ceiling is considerably lower than for something like programming. I am not saying there's no variance in people's ability to articulate, but that variance is considerably smaller than the variance in people's ability to write code or solve engineering problems.
So, whatever you learned by trial and error was just different ways or methods to get around the imperfections of the existing LLMs—not ways to use them skillfully according to their design goals. Their design goal is to achieve whatever task is given to them, as long as the intent is clear. These workarounds and tricks that you learned aren't something you build an intuition for. What you build an intuition for is finding new workarounds, but once you've found them, they're quite concrete and easy to describe to someone else who can simply use them to achieve the same results as you.
Tools that are designed to be operable via natural language aren't designed to be more thorough—it's actually the opposite. If you want more control, you have programming languages and search engines; thoroughness is where you get that high skill ceiling. The skill ceiling for using these tools is going to get lower and lower. The workarounds that you figure out may take skill to discover, but they don't take much skill to replicate.
If you share your "tips and tricks" with someone, then yeah, it will take them a week to start getting the same results as you because the skill ceiling is low and the workarounds are concrete/require less thinking.
noodletheworld|1 month ago
This is nonsense.
This field moves so fast the things you did more than a year ago aren't relevant anymore.
Claude code came out last year.
Anyone who was using random shit from before then isn't using it any more. It is completely obsolete in all but a handful of cases.
To make matters worse “intuition” about models is wasted learning, because they change, significantly, often.
Stop spreading FUD.
You can be significantly less harmful to people who are trying to learn by sharing what you actually do instead of nebulously hand waving about magical BS.
Dear readers: ignore this irritating post.
Go and watch Armin Ronacher on YouTube if you want to see what a real developer doing this looks like, and why it's hard.
yeasku|1 month ago
[deleted]
quitit|1 month ago
You'd be wise to spend your time just keeping a high-level view until workflows become stable and aren't advancing every few months.
The time to consider mastering a workflow is when a casual user of the "next release" wouldn't trivially supersede your capabilities.
Similarly we're still in the race to produce a "good enough" GenAI, so there isn't value in mastering anything right now unless you've already got a commercial need for it.
This all reminds me of a time when people were putting in serious effort to learn Palm Pilot's Graffiti handwriting recognition, only for the skill to be made redundant even before they were proficient at it.
antirez|1 month ago
embedding-shape|1 month ago
This is true, but still shocking. Professional (working with others at least) developers basically live or die by their ability to communicate. If you're bad at communication, your entire team (and yourself) suffer, yet it seems like the "lone ranger" type of programmer is still somewhat praised and idealized. When trying to help some programmer friends with how they use LLMs, it becomes really clear how little they actually can communicate, and for some of them I'm slightly surprised they've been able to work with others at all.
An example from the other day: a friend complained that the LLM they worked with was using the wrong library, and the wrong color for some element, and was surprised the LLM didn't know better from the get-go. Reading through the prompt, they never mentioned either once, and when asked about it, they thought "it should have been obvious". Which, yeah, to someone who has worked on this project for 2 years it might be obvious, but to something with zero history and zero context about what you do? How would you expect it to know that? Baffling sometimes.
menaerus|1 month ago
The world has changed for good and we will need to adapt. The bigger and more important question at this point, for those who want to see it, is no longer whether LLMs are good enough but, as you mention in your article, what will happen to the people who become unemployed. There's a reality check coming for all of us.
oncallthrow|1 month ago
Learning all of the advanced multi-agent workflows etc. etc... Maybe that gets you an extra 20%, but it costs a lot more time, and it's more likely to change over time anyway. So maybe not very good ROI.
__MatrixMan__|1 month ago
theshrike79|1 month ago
2. Build tools for the LLM, ones that are easy to use and don't spam stuff. Like give it tools to run tests that only return "Tests OK" if nothing failed, same with builds.
3. Look into /commands and Skills, both seem to be here to stay
Maybe a weekend of messing about and you'll be pretty well off compared to the vast masses who still copy/paste code out of ChatGPT to their editor.
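The "tools that don't spam stuff" point above can be sketched in a few lines of shell. This is a hypothetical helper (the name `run_quiet` and its shape are my own, not anything from the thread): it runs whatever test or build command you hand it, and surfaces either a single "Tests OK" line or just the tail of the failure log, so the agent's context window isn't flooded with passing-test noise.

```shell
# Hypothetical wrapper: run any test/build command, but only hand the
# agent "Tests OK" on success, or the tail of the log on failure.
run_quiet() {
  if out=$("$@" 2>&1); then
    echo "Tests OK"
  else
    # Failures usually sit at the end of the log; keep only those lines.
    printf '%s\n' "$out" | tail -n 20
    return 1
  fi
}

# A passing command collapses to a single line:
run_quiet true   # -> Tests OK
```

In practice you'd point the agent at a small script like this (e.g. `run_quiet go test ./...`) instead of the raw test runner; the same trick works for builds and linters.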
unknown|1 month ago
[deleted]
edg5000|1 month ago
I've learned a lot of new things this year thanks to AI. It's true that the low-level skills will atrophy. The high-level skills will grow though; my learning rate is the same, just at a much higher abstraction level, thus covering more subjects.
The main concern is the centralisation. The value I can get out of this thing currently well exceeds my income. AI companies are buying up all the chips. I worry we'll get something like the housing market, where AI ends up eating about 50% of our income.
We have to fight this centralisation at all costs!
wmwragg|1 month ago
nebula8804|1 month ago
Guess we are still in the 1970s era of AI computing. We need to hope for a few more step changes or some breakthrough on model size.
epolanski|1 month ago
In fact, I'd say I code even better since I started doing one hour per day of a mixture of fun coding and algo quizzes, while at work I mostly focus on writing a requirements plan and later an implementation plan, then letting the AI cook while I review all the output multiple times from multiple angles.
iLoveOncall|1 month ago
[deleted]
_ea1k|1 month ago
The most advanced tooling today looks nothing like the tooling for writing software 3 years ago. We've got multi-agent orchestration with built in task and issue tracking, context management, and subagents now. There's a steep learning curve!
I'm not saying that everyone has to do it, as the tools are so nascent, but I think it is worthwhile to at least start understanding what the state of the art will look like in 12-24 months.
CuriouslyC|1 month ago
Ekaros|1 month ago
simonw|1 month ago
One of the key skills needed in working with LLMs is learning to ignore the hype and marketing and figure out what these things are actually capable of, as opposed to LinkedIn bluster and claims from CEOs whose net worth is tied to investor sentiment in their companies.
If someone spends more time talking about "AGI" than about what they're actually building, filter that person out.
theshrike79|1 month ago
If so, I've got a JPEG of a monkey to sell you =)
dkdcio|1 month ago
zahlman|1 month ago
It's not that different overall, I suppose, from the loop of thinking of an idea and then implementing it and running tests; but potentially very disorienting for some.
epolanski|1 month ago
- find information about APIs without needing to open a browser
- writing a plan for your business-logic changes or having it reviewed
- getting a review of your code to find edge cases, potential security issues, potential improvements
- finding information and connecting the dots of where, what and why it works in some way in your code base?
Even without letting AI author a single line of code (where it can still be super useful) there are still major uses for AI.
xboxnolifes|1 month ago
Part of the problem with things that iterate quickly is that iterations tend to reference previous versions. So, you try learning the new hotness (v261), but there are implied references to v254, v239, and v198. Then you realize, v1, v5, v48, v87, v138, v192, and v230 have cute identifiers that you aren't familiar with and are never explained anywhere. New concepts get introduced in v25, v50, v102, and v156 that later became foundational knowledge that is assumed to be understood by the reader and is never explained anywhere.
So, if you feel confident something will be the next hotness, it's usually best to be an early adopter, so you gain your knowledge slowly over years instead of having to cram when you need to pick it up.
rvz|1 month ago
The ones pushing this narrative have one of the following:
* Investments in AI companies (which they will never disclose until those companies IPO / get acquired)
* Stock options as employees at AI companies, making them effectively paid boosters of the AGI nonsense
* A mid-life crisis / paranoia that their identity as a programmer is being eroded, forcing a pivot to AI
It is no different to the crypto web3 bubble of 2021. This time, it is even more obvious and now the grifters from crypto / tech are already "pivoting to ai". [0]
[0] https://pivot-to-ai.com/
KaiserPro|1 month ago
> It is no different to the crypto web3 bubble of 2021
web3 didn't produce anything useful, just noise. I couldn't take a web3 stack to make an arbitrary app. with the PISS machine I can.
Do I worry about the future, fuck yeah I do. I think I'm up shit creek. I am lucky that I am good at describing in plain English what I want.
menaerus|1 month ago
llmslave3|1 month ago
[deleted]
nikcub|1 month ago
I don't think it's a coincidence that some of the best developers[1] are using these tools, with some openly advocating for them, because it still requires core skills to get the most out of them.
I can honestly say that building end-to-end products with claude code has made me a better developer, product designer, tester, code reviewer, systems architect, project manager, sysadmin etc. I've learned more in the past ~year than I ever have in my career.
[0] abandoned cursor late last year
[1] see Linus using antigravity, antirez in OP, Jared at bun, Charlie at uv/ruff, mitsuhiko, simonw et al
dkdcio|1 month ago
(I had been using GitHub Copilot for 5+ years already, started as an early beta tester, but I don’t really consider that the same)
I like to say it’s like learning a programming language. it takes time, but you start pattern matching and knowing what works. it took me multiple attempts and a good amount of time to learn Rust, learning effective use of these tools is similar
I’ve also learned a ton across domains I otherwise wouldn’t have touched
nicce|1 month ago
Replace that with anything and you will notice that people building startups in an area will want to push a narrative like that, as it usually greatly increases the value of their companies. When the narrative gets big enough, big companies must follow, or they look like they're "lagging behind", whether the current thing brings value or not. It is a fire that keeps feeding itself. In the end, when it gets big enough, we call it a bubble. A bubble that may explode. Or not.
Whether the end user gets actual value or not is just a side effect. But everyone wants to believe that it brings value, otherwise they were foolish to jump on the train.
bsaul|1 month ago
For now i think people can still catch up quickly, but at the end of 2026 it's probably going to be a different story.
Avshalom|1 month ago
rvz|1 month ago
Ah yes, an ecosystem that is fundamentally built on probabilistic quicksand, where even with the "best prompting practices" you still get agents violating the basics of security and committing API keys when they were told not to. [0]
[0] https://xcancel.com/valigo/status/2009764793251664279
edg5000|1 month ago
Can you elaborate? Skill in AI use will be a differentiator?
__MatrixMan__|1 month ago
kahrl|1 month ago
unknown|1 month ago
[deleted]