mlinhares | 1 month ago
I think the problem is that people:
* see the hype;
* try to replicate the hype;
* it fails miserably;
* they throw everything away;
I'm on call this week at my job, and one of the issues was adding a quick validation (verifying that the length of a thing was exactly 15). I could have sat down and done it myself, but instead I spun up an agent, told it where the code was, told it how to add the change (we always gate changes like that behind feature flags), read the code, prompted it to fix one thing, and boom, the PR was ready. I wrote 3 paragraphs, didn't have to sit and wait for CI or any of the other bullshit to get it done, focused on more important stuff, and still got the fix out.
Don't believe the hype but also don't completely discount the tools, they are incredible help and while they will not boost your productivity by 500%, they're amazing.
pipo234|1 month ago
> * see the hype;
> * try to replicate the hype;
> * it fails miserably;
> * they throw everything away;
I'm sure doing two years of vibecoding is a considerably more sincere attempt than "trying to replicate the hype and failing at it".
mhast|1 month ago
Claude Code (arguably the most recent large change, even if it wasn't the first of its type) was released one year ago.
After watching the video I'd say that it is similar to my own reaction when opening my own code that is 2 years old. (To be clear, code I myself wrote 2 years ago, without AI.) Or even more realistically, code I wrote 6 months ago.
But it mostly reads like someone making an exaggerated claim to get a boost from a populist narrative.
Lerc|1 month ago
>There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.
Just under one year ago.
Aurornis|1 month ago
Different outlets tilt in different directions. On HN and some other tech websites it's common to find declarations that LLMs are useless from people who tried the free models on ChatGPT (which isn't the coding model) and jumped to conclusions after the first few issues. On LinkedIn it's common to find influencers who used ChatGPT for a couple of things at work and are ready to proclaim it's going to handle everything in the future (including writing the text of their LinkedIn post).
The most useful, accurate, and honest LLM information I've gathered comes from spaces where neither extreme prevails. You have to find people who have put in the time and are realistic about what can and cannot be accomplished. That's when you start learning the techniques for using these tools for maximum effect and where to apply them.
tbagman|1 month ago
Do you have any pointers to good (public) spaces like this? Your take sounds reasonable, and so I'm curious to see that middle-ground expression and discussion.
thesz|1 month ago
"They're perfectly justified: the majority of hot new whatevers do turn out to be a waste of time, and eventually go away. By delaying learning VRML, I avoided having to learn it at all."
bdangubic|1 month ago
This requires a level of professionalism that 97.56% of SWEs do not have.
bambax|1 month ago
Yes.
I use LLMs for coding in the exact opposite way as described in the video. The video says that most people start big, then the LLM fails, then they reduce the scope more and more until they're actually doing most of the work while thinking it's all the machine's work.
I use AI in two ways. With Python, I ask it to write micro functions and I do all of the general architecture. This saves a lot of time, but I could do without AI if need be.
But recently I also started making small C utilities that each do exactly one thing, and for those the LLMs write most if not all of the code. I start very small with a tiny proof of concept and iterate over it, adding functionality here and there until I'm satisfied. I still inspect the code and suggest refactorings, or putting things into independent, reusable modules for static linking, etc.
But I'm not a C coder and I couldn't make any of these apps without AI.
Since the beginning of the year, I've made four of them. The code is probably subpar, but they all work great, never crash, and I use them every day.
kranner|1 month ago
Instead my productivity would be optimised in service of my employer, while I still had to work on other things, the more important work you cite. It's not like I get to finish work early and have more leisure time.
And that's not to mention, as discussed in the video, what happens if the code turns out to be buggy later. The AI gets the credit, I get the blame.
Lerc|1 month ago
You should be aiming to use AI in a way that the work it does gives you more time to work on these things.
I can see how people could end up in an environment where management expects AI use to simply increase the speed of exactly what you do right now. That's when people expect the automobile to behave like a faster horse. I do not envy people placed in that position. I don't think that is how AI should be used, though.
I have been working on test projects using AI. These are projects where there is essentially no penalty for failure, and I can explore the bounds of what they offer. They are no panacea, and people will be writing code for a long while yet, but the bounds of their capability are certainly growing. Working on ideas with them, I have been able to think more deeply about what the code was doing and what it should do. Quite often a lot of the deep thinking in programming is gaining a greater understanding of what the problem really is. You can benefit from asking AI for a quick solution simply to get a better understanding of why a naive implementation will not work. You don't need to use any of that code at all, but it can easily show you why something is not as simple as it seems at first glance.
I might post a Show HN soon about a test project I started over the Christmas break. It's a good example of what I mean. I did it in Claude Artifacts instead of using Claude Code, just to see how well I could develop something non-trivial in this manner. There have certainly been periods of frustration trying to get Claude to understand particular points, but some of those areas of confusion came from my presumptions about what the problem was and how it differed from what the problem actually was. That is exactly the insight that you refer to as the tasty bits.
I think there is some adaptation needed in how you feel about the process of working on a solution. When you are stuck on a problem and trying things that should make it work, the work can absorb you in the process. AI can diminish this, but I think some of that is precisely because it is giving you more time to think about the hard stuff, and that hard stuff is, well, hard.
ikidd|1 month ago
Now that is some stark truth right there.
bambax|1 month ago
In a sense, AI coding is like using a 3D printer. The machine outputs the final object, but you absolutely decide how it will look and how it will work.
joenot443|1 month ago
This comment took 15s to write; typing can be very fast.
catlifeonmars|1 month ago
Was it 3 paragraphs to change a line of code?
I agree the tools are amazing; if you sit back and think about it, it's insane that you can generate code from conversation.
But so far I haven’t found the tools to be amazingly productive for me for small things like that. It’s usually faster for me to just write the thing directly than to have a conversation, and maybe iterate once or twice. Something that takes 5 minutes just turns into 15 minutes and I still need to be in the loop. If I still need to read the code anyway it’s not effective use of my time.
Now, what I have found incredibly productive is LLM-assisted code completion in a read-eval-print loop. That, and help generating inline documentation.
theglenn88_|1 month ago
You need to pick and choose when to use them, it largely depends on the task.
I use AI a lot now for programming and spend more time code reviewing, sometimes though, I use my hands.
My number one rule is: don't try to do too much in one prompt.
nicoburns|1 month ago
You run CI on human-generated PRs, but not AI-generated PRs? Why would there be a difference in policy there?