davnicwil|1 month ago
The details are what stops it from working in every form it's been tried.
You cannot escape the details. You must engage with them and solve them directly, meticulously. It's messy, it's extremely complicated and it's just plain hard.
There is no level of abstraction that saves you from this, because the last level is simply things happening in the world in the way you want them to, and it's really really complicated to engineer that to happen.
I think this is evident from looking at the extreme case. There are plenty of companies with software engineers who truly can turn instructions articulated in plain language into software. But you see lots of these companies not succeeding, for the simple reason that those providing the instructions are not sufficiently engaged with the details, or have the details wrong. Conversely, for the most successful companies the opposite is true.
libraryofbabel|1 month ago
Going back and forth on the detail in requirements and mapping it to the details of technical implementation (and then dealing with the endless emergent details of actually running the thing in production on real hardware on the real internet with real messy users actually using it) is 90% of what’s hard about professional software engineering.
It’s also what separates professional engineering from things like the toy leetcode problems on a whiteboard that many of us love to hate. Those are hard in a different way, but LLMs can do them on their own better than humans now. Not so for the other stuff.
[0] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
godelski|1 month ago
This repeats over and over. There are no big problems; there are only a bunch of little problems that accumulate. As engineers, scientists, researchers, etc., our literal job is to break down problems into many smaller problems and then solve them one at a time. And again, we only solve them to a good-enough level, as perfection doesn't exist. The problems we solve were never a single problem, but many, many smaller ones.
I think the problem is we want to avoid depth. It's difficult! It's frustrating. It would be great if depth were never needed. But everything is simple until you actually have to deal with it.
lkuty|1 month ago
CPLX|1 month ago
The first thing that comes to mind when I see this as a counterargument is that I've quite successfully built enormous amounts of completely functional digital products without ever mastering any of the details that I figured I would have to master when I started creating my first programs in the late 80s or early 90s.
When I first started, it was a lot about procedural thinking, like BASIC goto X, looping, if-then statements, and that kind of thing. That seemed like an abstraction compared to just assembly code, which, if you were into video games, was what real video game people were doing. At the time, we weren't that many layers away from the ones and zeros.
It's been a long march since then. What I do now is still sort of shockingly "easy" to me sometimes when I think about that context. I remember being in a band and spending a few weeks trying to build a website that sold CDs via credit card, and trying to unravel how cgi-bin worked using a 300-page book I had bought and all that. Today a problem like that is so trivial as to be a joke.
Reality hasn't gotten any less detailed. I just don't have to deal with it any more.
Of course, the standards have gone up. And that's likely what's gonna happen here. The standards are going to go way up. You used to be able to make a living just launching a website to sell something on the internet that people weren't selling on the internet yet. Around 1999 or so I remember a friend of mine built a website to sell stereo stuff. He would just go down to the store in New York, buy it, and mail it to whoever bought it. Made a killing for a while. It was ridiculously easy if you knew how to do it. But most people didn't know how to do it.
Now you can make a living pretty "easily" selling a SaaS service that connects one business process to another, or integrates some workflow. What's going to happen to those companies now is left as an exercise for the reader.
I don't think there's any question that there will still be people building software, making judgment calls, and grappling with all the complexity and detail. But the standards are going to be unrecognizable.
bryanrasmussen|1 month ago
calenti|1 month ago
fragmede|1 month ago
TeMPOraL|1 month ago
I see no reason why this wouldn't be achievable. Having lived most of my life in the land of details, country of software development, I'm acutely aware 90% of effort goes into giving precise answers to irrelevant questions. In almost all problems I've worked on, whether at tactical or strategic scale, there's either a single family of answers, or a broad class of different ones. However, no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters". Either way, I'm forced to pick and spell out a concrete answer myself, by hand. Fortunately, LLMs are slowly starting to help with that.
mkleczek|1 month ago
In other words, it only looks easy in hindsight.
yunohn|1 month ago
Programming languages already take lots of decisions implicitly and explicitly on one’s behalf. But there are way more details of course, which are then handled by frameworks, libraries, etc. Surely at some point, one has to take a decision? Your underlying point is about avoiding boilerplate, and LLMs definitely help with that already - to a larger extent than cookie cutter repos, but none of them can solve IRL details that are found through rigorous understanding of the problem and exploration via user interviews, business challenges, etc.
godelski|1 month ago
You can't just know right off the bat. Doing so contradicts the premise. You cannot determine if a detail isn't important unless you get into the details. If you only care about a few grains of sand in a bucket, you still have to search through the whole bucket of sand for those few grains.
popalchemist|1 month ago
bandrami|1 month ago
geon|1 month ago
bryanrasmussen|1 month ago
OK, for me it is the last 10% that is of any interest whatsoever. And I think that has been the case with any developer I've ever worked with I consider to be a good developer.
OK, the first 90% can have spots of enjoyment, like a nice gentle Sunday drive stopping off at Dairy Queen, but it's not normally what one would call "interesting".
Imustaskforhelp|1 month ago
Now, I do agree with you, and this is why I feel like AI can be good for prototyping or for internal use cases. Want to try out some idea? Sure, use it. Have a website which sucks and want to quickly spin up an alternative for personal use? Go for it, maybe even publish it to the web as open source.
Take feedback from people if they give any and run with it. So in essence, prototyping's pretty cool.
But whenever I wish to monetize, or even entertain the idea of monetizing, I feel like we can take some design ideas from the experimentation and then just write the code ourselves. My ideology is simple: I don't want to pay for some service which was written as AI slop. I mean, at that point, just share the prompt with us.
So at this point, just rewrite the code and actually learn what you are talking about. (I will give an example: I recently prototyped a simple Firecracker SSH thing using the gliderlabs/ssh Go package. I don't know how the AI code works; I just built it for my own use case. But if I ever (someday) try to monetize it in any sense, rest assured I will learn how gliderlabs/ssh works to its core and build it all with my own hands.)
TLDR: AI's good for prototyping, but once you've got the idea (or more ideas on top of it), try to rewrite it with your own understanding, because, as others have said, you won't understand the AI code and you'd spend 99% of your time on the 1% that AI can't do. At that point, why not just rewrite?
Also, if you rewrite, I feel like most people will be chill about buying it, even anti-AI people. Like, sure, use AI for prototypes, but give me code which I can verify and which you wrote and understand to its core, with 100% certainty of that fact.
If you ship AI slop while claiming to be into software projects for sustainability, you are gonna anger a crowd for no reason & have nothing beneficial come out of it.
So I think kind of everybody knows this, but AI code still gets to production because sustainability isn't the concern. That is the cause: sustainability just straight up isn't the concern.
If you have VCs who want you to add 100s of features, or want you to use AI or have AI integration or something (something I don't think every company or its creators should be interested in unless necessary), and those VCs are in it only for 3-5 years and might want to dump you or enshitten you short-term for their own gains, I can see why sustainability stops being a concern and we get to where we are.
Or another group most interested: the startup-entrepreneur hustle-culture people, who have a VC-like culture as well, where sustainability just doesn't matter.
I do hope that I am not blanket-naming these groups, because sure, some might be exceptions, but I am just sharing how the incentives aren't aligned and how these groups would likely end up shipping 90% AI slop, which is what we end up seeing in evidence at most companies.
I do feel like we need to boost more companies who are in it for the long run with sustainable practices, and people/indie businesses who are in it because they are passionate about some project (usually that happens when they face the problem themselves, or out of curiosity in many cases), because we as consumers have an incentive stick as well. I hope some movement can spawn up which captures this nuance, because I am not completely anti-AI, but not exactly pro either.
lazypenguin|1 month ago
fragmede|1 month ago
cwmoore|1 month ago
corysama|1 month ago
— Richard Guindon
This is certainly true of writing software.
That said, I am assuredly enjoying trying out artificial writing and research assistants.
bananaflag|1 month ago
Of course you can. The same way the manager ignores the details when they ask the developer to do something, they can ignore them when they ask the machine to do it.
utopiah|1 month ago
Yes, it has nothing to do with dev specifically; dev "just" happens to be a way to do so that is text-based, which is the medium of LLMs. What also "just" happens to be convenient is that dev is expensive, so if a new technology might help make something possible and/or make it inexpensive, it's potentially a market.
Now, pesky details like the actual implementation, who's got time for that? It's just a few more trillions away.
ninjagoo|1 month ago
> The details are what stops it from working in every form it's been tried.
Since the author was speaking to business folk, I would argue that their dream is cheaper labor, or really just managing a line item in the summary budget. As evidenced by outsourcing efforts. I don't think they really care about how it happens - whether it is manifesting things into reality without having to get into the details, or just a cheaper human. It seems to me that the corporate fever around AI is simply the prospect of a "cheaper than human" opportunity.
Although, to your point, we must await AGI, or get very close to it, to be able to manifest things into reality without having to get into the details :-)
njhnjh|1 month ago
[deleted]
michaelsalim|1 month ago
brabel|1 month ago
While I agree with this, I think that it’s important to acknowledge that even if you did everything well and thought of everything in detail, you can still fail for reasons that are outside of your control. For example, a big company buying from your competitor who didn’t do a better job than you simply because they were mates with the people making the decision… that influences everyone else and they start, with good reason, to choose your competitor just because it’s now the “standard” solution, which itself has value and changes the picture for potential buyers.
In other words, being the best is no guarantee of success.
MarceliusK|1 month ago
SftwrSvior81|1 month ago
It's basically this:
"I'm hungry. I want to eat."
"Ok. What do you want?"
"I don't know. Read my mind and give me the food I will love."
dexdal|1 month ago
bodegajed|1 month ago
mattgreenrocks|1 month ago
They want to be seen as competent without the pound of flesh that mastery entails. But AI doesn’t level one’s internal playing field.
yomismoaqui|1 month ago
For two almost identical problems with just a little difference between them, the solutions can be radically different in complexity, price, and time to deliver.
gonational|1 month ago
sanderjd|1 month ago
taneq|1 month ago
yoquan|1 month ago
kvirani|1 month ago
davnicwil|1 month ago
broast|1 month ago
hintymad|1 month ago
So it is not that details don't matter, but that now people can easily transfer certain know-how from other great minds. Unfortunately (or fortunately?), most people's jobs are learning and replicating know-how from others.
drcxd|1 month ago
Now, if you want to use the dashboard to do something else really brilliant, it is good enough as a means. Just make sure the dashboard is not the end.
sublinear|1 month ago
Especially in web, boilerplate/starters/generators that do exactly what you want, with little to no code or familiarity required, have been the norm for at least a decade. This is the lifeblood of repos like npm.
What we have is better search for all this code and documentation that was already freely available and ready to go.
onenite|1 month ago
bitwize|1 month ago
acron0|1 month ago
cudgy|1 month ago
mattgreenrocks|1 month ago
threethirtytwo|1 month ago
Speech recognition was a joke for half a century until it wasn’t. Machine translation was mocked for decades until it quietly became infrastructure. Autopilot existed forever before it crossed the threshold where it actually mattered. Voice assistants were novelty toys until they weren’t. At the same time, some technologies still haven’t crossed the line. Full self driving. General robotics. Fusion. History does not point one way. It fans out.
That is why invoking history as a veto is lazy. It is a crutch people reach for when it’s convenient. “This happened before, therefore that’s what’s happening now,” while conveniently ignoring that the opposite also happened many times. Either outcome is possible. History alone does not privilege the comforting one.
If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals. The slope matters more than anecdotes. The relevant question is not whether this resembles CASE tools. It’s what the world looks like if this curve runs for five more years. The conclusion is not subtle.
The reason this argument keeps reappearing has little to do with tools and everything to do with identity. People do not merely program. They are programmers. “Software engineer” is a marker of intelligence, competence, and earned status. It is modern social rank. When that rank is threatened, the debate stops being about productivity and becomes about self preservation.
Once identity is on the line, logic degrades fast. Humans are not wired to update beliefs when status is threatened. They are wired to defend narratives. Evidence is filtered. Uncertainty is inflated selectively. Weak counterexamples are treated as decisive. Strong signals are waved away as hype. Arguments that sound empirical are adopted because they function as armor. “This happened before” is appealing precisely because it avoids engaging with present reality.
This is how self delusion works. People do not say “this scares me.” They say “it’s impossible.” They do not say “this threatens my role.” They say “the hard part is still understanding requirements.” They do not say “I don’t want this to be true.” They say “history proves it won’t happen.” Rationality becomes a costume worn by fear. Evolution optimized us for social survival, not for calmly accepting trendlines that imply loss of status.
That psychology leaks straight into the title. Calling this a “recurring dream” is projection. For developers, this is not a dream. It is a nightmare. And nightmares are easier to cope with if you pretend they belong to someone else. Reframe the threat as another person’s delusion, then congratulate yourself for being clear eyed. But the delusion runs the other way. The people insisting nothing fundamental is changing are the ones trying to sleep through the alarm.
The uncomfortable truth is that many people do not stand to benefit from this transition. Pretending otherwise does not make it false. Dismissing it as a dream does not make it disappear. If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even when the destination is not one you want to visit.
djeastm|1 month ago
imiric|1 month ago
> If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue.
Wait, so we can infer the future from "trendlines", but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias...
I would argue that data points that are barely a few years old, and obscured by an unprecedented hype cycle and gold rush, are not reliable predictors of anything. The safe approach would be to wait for the market to settle, before placing any bets on the future.
> Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
> The reason this argument keeps reappearing has little to do with tools and everything to do with identity.
Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
I don't get how anyone can speak about trends and what's currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
habinero|1 month ago
My dude, I just want to point out that there is no evidence of any of this, and a lot of evidence of the opposite.
> If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even
You first, lol.
> This is how self delusion works
Yeah, about that...
avcloudy|1 month ago
Another lesson history has taught us though, is that people don't defend narratives, they defend status. Not always successfully. They might not update beliefs, but they act effectively, decisively and sometimes brutally to protect status. You're making an evolutionary biology argument (which is always shady!) but people see loss of status as an existential threat, and they react with anger, not just denial.
cudgy|1 month ago
This seems extreme and obviously incorrect.
cuteeaglet|1 month ago
[deleted]