micw|27 days ago
E.g. I've been a software architect and developer for many years, so I already know how to build software, but I'm not familiar with every language or framework. AI has enabled me to write kinds of software I never learned or never had time for. For example, I recently re-implemented an Android widget that hadn't been updated by its original author in a decade, and I fixed a bug in a Linux scanner driver. I could not have done either of these properly (within an acceptable time frame) without AI. But I also could not have done either of them properly without my knowledge and experience, even with AI.
Same for daily tasks at work. AI makes me faster here, but it also lets me do more. Implement tests for all edge cases? Sure, always; that's effort I used to skip to save time. More code reviews. More documentation. Better quality in the same (always limited) time.
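To make "tests for all edge cases" concrete, here is a minimal sketch of the kind of boundary tests that used to get skipped for time. The function `clamp_percentage` and its behaviour are hypothetical, not from the thread:

```rust
// Hypothetical helper: clamp a raw score into the 0..=100 range.
// The name and behaviour are illustrative only.
pub fn clamp_percentage(raw: i64) -> u8 {
    raw.clamp(0, 100) as u8
}

fn main() {
    // The edge cases one would previously skip to save time:
    assert_eq!(clamp_percentage(-1), 0);          // just below range
    assert_eq!(clamp_percentage(0), 0);           // lower bound
    assert_eq!(clamp_percentage(100), 100);       // upper bound
    assert_eq!(clamp_percentage(i64::MAX), 100);  // extreme input
}
```

Writing the four boundary cases by hand takes minutes; the point is that with an assistant they get written at all.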
Sammi|26 days ago
Now you don't have to pay a lot of money to get a mediocre solution that works.
All those things that are broken, but you don't have time or money for them, you can have them fixed now.
perrygeo|26 days ago
On complex topics where I know what I'm talking about, model output contains so much garbage with incorrect assumptions.
But on complex topics where I'm out of my element, the output always sounds strangely plausible.
This phenomenon writ large is terrifying.
joshbee|27 days ago
I've found that giving LLMs the input and output interfaces really helps keep them on rails, while I stay involved in the overall process instead of just blindly "vibe coding."
Having the AI also help with unit tests around business logic has been super helpful, in addition to manual testing as normal. It feels like our overall velocity and code quality have been going up, regardless of what some of these articles are saying.
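A minimal sketch of the "interfaces as rails" idea, under the assumption of a hypothetical pricing rule: the human writes the contract and the unit tests, and the assistant fills in an implementation against them. All names here (`PriceRule`, `PercentOff`) are made up for illustration:

```rust
// The human-written contract the assistant must satisfy.
pub trait PriceRule {
    /// Return the discounted price in cents; never negative.
    fn apply(&self, price_cents: u32) -> u32;
}

// An implementation an assistant might produce against that contract:
// a percentage discount, clamped to at most 100%.
pub struct PercentOff(pub u32);

impl PriceRule for PercentOff {
    fn apply(&self, price_cents: u32) -> u32 {
        let pct = self.0.min(100);
        price_cents - price_cents * pct / 100
    }
}

fn main() {
    // Human-written unit tests around the business logic keep the
    // generated code honest, as the parent comment suggests.
    assert_eq!(PercentOff(25).apply(1000), 750);
    assert_eq!(PercentOff(150).apply(1000), 0); // clamped to 100%
}
```

The trait plus the tests are the rails: the assistant can rewrite the body freely, but a wrong signature fails to compile and wrong logic fails the tests.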
rustyhancock|27 days ago
I agree. I write out a sketch of what I want. With a recent embedded project in C, I gave it a list of function signatures and a high-level description and was very satisfied with what it produced. It would have taken me days to nail down the particulars of the HAL (like what kind of sleep I want, or precisely how to set up the WDT and ports).
I think it's also language dependent.
I imagine JavaScript can be a crapshoot. The language is too forgiving.
Rust is where I have had the most success. That is likely a personal skill issue: I know we want an Arc<DashMap>, but will I remember all the foibles of accessing it? No.
But given the rigidity of the compiler and the strong typing, I can focus on what the code functionally does: I make sure I'm happy with the shape/interface and function signature, and the compiler makes sure it's happy with the code.
It's quite fast work. It lets me use my high level skills without my lower level skills getting in the way.
And I'd rather rewrite the code at a mid-level than start it fresh. I agree with others that once it's a large code base, I'm too far behind in understanding the overall system to easily work on it. That's true of human products too: someone else's code always gives me the ick.
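For readers curious about the "foibles" alluded to above: the classic one is holding a read guard on a shared map while writing to the same data (with DashMap, holding a `Ref` while writing to the same shard can deadlock). Below is a std-only sketch using `Arc<Mutex<HashMap>>` in place of `Arc<DashMap>` so it needs no external crates; the tight lock scope inside each thread is the point:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each bump a shared counter once,
// then return the final count.
fn concurrent_hits(threads: u64) -> u64 {
    let counts: Arc<Mutex<HashMap<String, u64>>> =
        Arc::new(Mutex::new(HashMap::new()));

    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counts = Arc::clone(&counts);
            thread::spawn(move || {
                // Keep the lock scope tight: the guard is dropped at the
                // end of this statement. Holding a guard while taking
                // another lock on the same data is the classic deadlock.
                *counts
                    .lock()
                    .unwrap()
                    .entry("hits".to_string())
                    .or_insert(0) += 1;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    let total = *counts.lock().unwrap().get("hits").unwrap();
    total
}

fn main() {
    assert_eq!(concurrent_hits(4), 4);
}
```

The compiler enforces the `Send`/ownership side of this, which is exactly the "rigidity" the comment credits for making generated code trustworthy.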
jinhkuan|26 days ago
Wondering what sort of artifacts beyond ADRs/natural-language prompts help steer LLMs to do the right thing.
netdevphoenix|27 days ago
There are some things here that folks making statements like yours often omit, and it makes me very suspicious of your (over)confidence. Mostly these statements talk in a business-oriented, short-term-results mode without mentioning any introspective gains (i.e. empirically supported understanding) or long-term gains (do you feel confident making further changes _without_ the AI, now that you have gained new knowledge?).
1. Are you 100% sure your code changes didn't introduce unexpected bugs?
1a. If they did, would you be able to tell whether they were behaviour bugs (i.e. no crashing or exceptions thrown) without the AI?
2. Did you understand why the bug was happening without the AI giving you an explanation?
2a. If you didn't, did you empirically test the AI's explanation before applying the code change?
3. Has fixing the bug improved your understanding of the driver behaviour beyond what the AI told you?
3a. Have you independently verified your gained understanding or did you assume that your new views on its behaviour are axiomatically true?
Ultimately, there are two things here: one is understanding the code change (why it is needed, why that particular implementation is better than the alternatives, what future improvements could be made to it), and the other is skill (has this experience boosted your OWN ability in this particular area? In other words, could you make further changes WITHOUT using the AI?).
This reminds me of people that get high and believe they have discovered these amazing truths, because they FEEL it, not because they have actual evidence. When asked to write down these amazing truths while high, all you get in the notes are meaningless words. While these assistants are more amenable to being empirically tested, I don't believe most of the AI hypers (and I include you in that category) are actually approaching this with the rigour it entails. It is likely why people often think that none of you (people writing software for a living) are experienced in or qualified to understand and apply scientific principles to building software.
Arguably, AI hypers should lead with data, not anecdotal evidence. For all the grandiose claims, empirical data obtained under controlled conditions on this particular matter is conspicuous by its absence.
jacquesm|27 days ago
I've been playing with various AI tools and homebrew setups for a long time now and while I see the occasional advantage it isn't nearly as much of a revolution as I've been led to believe by a number of the ardent AI proponents here.
This is starting to get into 'true believer' territory: you get these two camps 'for and against' whereas the best way forward is to insist on data rather than anecdotes.
AI has served me well, no doubt about that. But it certainly isn't a passe-partout and the number of times it has caused gross waste of time because it insisted on chasing some rabbit simply because it was familiar with the rabbit adds up to a considerable loss in productivity.
The scientific principle is a very powerful tool in such situations and anybody insisting on it should be applauded. It separates fact from fiction and allows us to make impartial and non-emotional evaluations of both theories and technologies.
micw|27 days ago
> Are you 100% sure your code changes didn't introduce unexpected bugs?
Who is this ever? But I do code reviews, and I usually generate a bunch of tests along with my PRs (if the project has at least _some_ test infrastructure).
The same applies to the rest of the points. But that's only _my_ way of doing these things. I can imagine that others do it differently and that the points above are more problematic in those cases.
mlrtime|27 days ago
How often have you written code and been 100% sure your code didn't introduce ANY bugs?
Seriously, for most of the code out there, who cares? If it's in a private or even a public repo, it doesn't matter.
aaaasmile|26 days ago
This reminds me of people who get sad when they realize they haven’t discovered anything amazing.
I am pedantic: "people that" → "people who" (for people, "who" is preferred).
ivell|27 days ago
I see it as empowering for building custom tooling that need not be a high-quality, maintained project.
trcf23|27 days ago
So I'm not sure a study from 2024, or the impact on code produced during 2024-2025, can be used to judge current AI coding capabilities.
viraptor|27 days ago
Even that could use some nuance. I'm generating presentations in interactive JS. If they work, they work - that's the result, and I extremely don't care about the details for this use case. Nobody needs to maintain them, nobody cares about the source. There's no need for "properly" in this case.
kilninvar|27 days ago
Contrast this with something you do know but can't be arsed to make; you can keep re-rolling a design until you get something you know works and can confirm. Perfect, time saved.