I've been coding now for about 30 years. I've never moved up into management and, now at almost 50, it's pretty clear I never will, and will spend the remainder of my career coding (which I'm perfectly fine with).
In the 30 years I've been coding, languages have come and gone, platforms have come and gone, frameworks have come and gone, methodologies have come and gone, but two things have never changed:
1) Software development is wildly unpredictable. I've never met anybody who could reliably predict how long something would take to develop, even if the requirements were "frozen" and even if they were the person delivering it. I have met people who could predict ahead of time how long it would take them to deliver something, but what they delivered always had "bugs" (either actual incorrect behavior, or edge cases that nobody thought of until they actually used the software) that themselves took an unpredictable amount of time to fix.
2) No organization can ever accept any amount of unpredictability. Every "advance" in software development I've ever seen has claimed to (implicitly or explicitly) remove the inherent unpredictability in software development, although none have actually delivered. They're fine with spending months in planning meetings while not actually developing anything tangible as long as those planning meetings provide the illusion of predictability.
Thank you for (1). I was trying to explain to someone the other day how unpredictable coding is, and you're right, it's wildly unpredictable. Even tiny projects sometimes go far off the rails.
I remember working on this one major project - an online music store - and one of the execs hired his very lovely wife to project manage a team of developers. She would create these fantastic charts in Microsoft Project and every day try to somehow - somehow - get the developers to either (a) give her timescales for each component, or (b) explain why previous component X took twice as long as they had scheduled.
I've been writing code for 40 years and I still get caught out myself. I try to give realistic deadlines and then some fucking bug in the compiler or other such insanity will come along and make me look like a fool.
In my experience, one can accurately predict development time by correctly working around the planning fallacy.
The planning fallacy is something like "Humans are hopelessly optimistic. Even if you think you're planning for the worst case, unexpected things will happen, and your estimate is more like the best case scenario. This is true even if you try to compensate for the planning fallacy by being more pessimistic".
So you can't fix it by padding more weeks, trying to think of all the things that might happen, etc.
The only way to compensate for this is detailed in Kahneman's "Thinking, Fast and Slow" (a must-read for many reasons): start with prior evidence. Rather than making your own estimate, start with "How long did it ACTUALLY take the last time someone did something like this?" and only then modify that estimate for circumstances.
There is a huge gap between these two methods:
1. "How long does it take an average team to create a SaaS product? Maybe a year?" Then modify for "but our team is extra smart, so maybe I'll shorten that to 10 months"
vs
2. "How long will this take? It's not a complicated product, and our team is smart: maybe 4 weeks for feature X, 6 for feature Y, 2 to deploy. I'll even throw in an extra 4 weeks of padding for unknowns, so 16 weeks".
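The outside-view correction the parent describes can be sketched in a few lines. This is my own illustration, not Kahneman's procedure: the history tuples and numbers are made up, and the only idea taken from the comment is "anchor on actuals, not on your own bottom-up plan".

```python
from statistics import median

def outside_view_estimate(inside_view_weeks, history):
    """Correct an inside-view estimate with reference-class evidence:
    scale it by the median actual/estimated ratio of similar past projects."""
    ratios = [actual / estimated for estimated, actual in history]
    return inside_view_weeks * median(ratios)

# Hypothetical history of comparable projects: (estimated weeks, actual weeks)
history = [(8, 16), (12, 30), (6, 9), (10, 25)]

# The "16 weeks including padding" plan from method 2, corrected by method 1
print(outside_view_estimate(16, history))  # 36.0
```

The point the correction makes concrete: the padding in method 2 is itself an inside-view guess, while the median ratio encodes what padding historically turned out to be needed.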
Let me add more nuance here. We are always balancing a set of constraints: total cost, speed of delivery, certainty of cost, and certainty of delivery for a given date are but four.
Agile, for example, suggests that decreasing the cost of change at the expense of certainty of cost will increase the total chances of delivering within cost and date. Outsourcers are (implicitly) arguing that the decrease in total cost is worth the other friction in the organization.
This all goes back to return on investment. In the most simplistic case, if I have ten people working on this task now, and this software will allow me to reduce that to six, then that's four people I don't have to pay for that task. If those four people are making $100,000 a year, and the software lasts 10 years before having to be replaced, that's 4 million dollars. The software plus maintenance needs to cost less than 4 million dollars to make sense. We all know the simplistic case.
But if I can trade $100,000 dollars in guaranteed cost for a reduction of $500,000 in risk in making that goal, I can trade cost for certainty and that's going to be a good trade. Now, as developers we know that these numbers are often fudged, but everyone realizes this is inexact and just the best we all can do. We need to know how to think about these in terms of arguing against bad investments and steer toward good investments by knowing how the tradeoffs works.
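The arithmetic in the two paragraphs above can be written out directly. A minimal sketch using only the numbers the comment itself gives; the function names are mine:

```python
def labor_savings(people_replaced, salary, years):
    """Labor cost avoided over the software's lifetime (the simplistic case)."""
    return people_replaced * salary * years

# 10 people reduced to 6, at $100k/year, software lasting 10 years:
budget_ceiling = labor_savings(10 - 6, 100_000, 10)
print(budget_ceiling)  # 4000000 -- build + maintenance must cost less than this

# Trading cost for certainty: spend $100k guaranteed to remove $500k of
# expected risk. A good trade whenever expected risk removed exceeds the cost.
extra_cost, risk_removed = 100_000, 500_000
print(risk_removed - extra_cost)  # 400000 of expected value gained
```

As the comment says, the inputs are fudged in practice; the value of the model is that it gives you a shared frame for arguing against bad investments, not precise numbers.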
Software does not have to be wildly unpredictable if you do good estimation work up front. Being good at this is very important if you work for clients that have a specific budget or are doing fixed bids for software. The main thing is that you have to take estimation very seriously. You have to put experienced people on the estimation task, and you have to spend a lot of time on it to get it right, but doing it right can pay huge dividends. A few strategies that have worked very well for me:
1. When doing estimates, estimate hierarchically and break down every task until the leaf nodes are 4 hours or less. If the leaf nodes are a week of effort, that means that you don't yet understand fully what the task entails. If you don't know what the task entails, it is very difficult to get an accurate estimate.
2. Maintain a database of various projects and features, and use them for comparison purposes. If you know how long something actually took in the past, you can much more accurately predict how long it will take to do something similar again.
3. Include specific time for various types of "overhead": meetings, code review, writing tests, debugging, making presentations, etc.
4. Derisk, ahead of time, the things that are truly unpredictable: doing something in a brand-new programming language or framework, training an ML model on a new type of data, working on datasets of unknown quality, etc. For those you need to run a small exploratory project first to determine feasibility and generate an estimate.
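Strategy 1 above amounts to rolling estimates up a task tree and refusing to accept leaves that are too coarse. A small sketch of that rollup; the feature breakdown and the names in it are invented for illustration:

```python
def rollup(task):
    """Sum leaf-node hour estimates up a task tree, enforcing the rule that
    any leaf over 4 hours means the task isn't understood yet."""
    if "subtasks" not in task:
        if task["hours"] > 4:
            raise ValueError(f"{task['name']}: break this down further")
        return task["hours"]
    return sum(rollup(t) for t in task["subtasks"])

# Hypothetical breakdown for a small feature, including explicit overhead
feature = {
    "name": "export-to-csv",
    "subtasks": [
        {"name": "schema mapping", "hours": 3},
        {"name": "streaming writer", "hours": 4},
        {"name": "overhead", "subtasks": [
            {"name": "code review", "hours": 2},
            {"name": "tests", "hours": 4},
        ]},
    ],
}
print(rollup(feature))  # 13
```

The `ValueError` is the useful part: the rollup refuses to produce a number until every leaf is small enough to actually be understood, which is the point of strategy 1.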
The main thing is that this is clearly not impossible. Years ago I got really interested in the Apollo program as an example of bringing in, on time, perhaps the most complex and risk-laden project ever attempted. One of the things that clearly stands out about that project is the huge amount of derisking that happened first. At one level you had the Mercury and Gemini programs figuring out human spaceflight generally (and docking, with Gemini), and things like the first-stage engines of the Saturn V were already in development in the mid-fifties.
You can make some good guesses about levels of unpredictability based on previous experience though.
For example:
If the job involves dealing with anyone from Samsung or LG, it will take 2-3 times as long as you think.
If the job involves adding in a commercial third-party component which has a website about it, and that website doesn't clearly explain what the benefit of the thing actually is, the job will take 10-20 times as long as it should.
If the job involves a new feature Amazon or Google have just launched on one of their devices, and you are being CC'd on email chains involving representatives of Amazon or Google who are claiming implementation is very easy, the job will take a minimum of 6 months and may never be completed because the feature will be dropped or replaced with V2 before V1 ever actually works.
> You know what I haven’t seen? not once in 15 years?
> A company going under.
What a wild assertion: The OP hasn’t personally seen a company fail, and therefore software quality doesn’t matter? Bugs and slow delivery are fine?
It’s trivially easy to find counterexamples of companies failing because their software products were inferior to newcomers who delivered good results, fast development, and more stable experience. Startups fail all the time because their software isn’t good enough or isn’t delivered before the runway expires. The author is deliberately choosing to ignore this hard reality.
I think the author may have been swept up in big, slow companies that have so much money that they can afford to have terrible software development practices and massive internal bloat. Stay in this environment long enough and even the worst software development practices start to feel “normal” because you look around and nothing bad has happened yet.
For what it's worth, Mozilla was nearly killed at least twice by code quality.
Once at startup. The code inherited from Netscape was... let's say, charitably, difficult to maintain. Turning it into something that could actually be improved upon and tested took years without any release of the Mozilla Suite.
Once when Chrome emerged. Not because the code was particularly bad, but because its architecture didn't reflect more modern requirements in terms of responsiveness or security. See https://yoric.github.io/post/why-did-mozilla-remove-xul-addo... for a few more details.
I was at a successful company that nearly died in 2017, when our entire production system corrupted itself due to a sneaky scaling bug in a system we had ported to. The problem was, the migrated system had been running with live data for 3 months with the bug in it, so it was no longer possible to revert to the earlier working design. We were down for a week during which no clients could run, and we spent the next 12 months purely digging ourselves out of that hole, with all new development paused and all hands on limping the ship along. I would say that bug came very close to ending us. Luckily, we have never failed in a similar way since.
I think that it is (a little bit) more subtle: the importance of quality OF A PRODUCT (projects delivery is another beast) is relative to:
- the customer: either B2B or B2C
- the market share: minimal (< 1%... including all startups) or dominant (> 30%)
- B2C is really dynamic, and a few bad versions/products can make the customers fly away (except under strong dominance, like Windows, or when there's no equivalent product) and shut down a company. Price can be a strong factor, and the cost of migrating/switching is usually not considered
- B2B is more conservative: it's hard to enter the market (so small market shares need a lot of time to take off... if there's no competitor), but once you're in, the cost of change for a company is usually high enough to tolerate more bad versions (more so if there are few competitors, incompatibilities between products, legal requirements to keep records, a lot of "configuration", strong training requirements for a lot of people...). Companies as customers don't see switching software as a technical problem (replacing one editor with another) but as a management problem (training, cost to switch, data availability, availability of people already trained, cost of multi-year/multi-instance licences...)
> It’s trivially easy to find counterexamples of companies failing because their software products were inferior to newcomers who delivered good results, fast development, and more stable experience.
Is it? What I've seen is the opposite.
Businesses can be terrible top to bottom, slow, inefficient, and painful for customers, and still keep going for years and years. It's more about income/funding than product.
> I think the author may have been swept up in big, slow companies that have so much money that they can afford to . . .
That's what I'm talking about. They are legion! They could be companies that serve a niche that no one else does or with prohibitive switching costs (training is expensive). They could also be companies that somehow got enough market share that "no one gets fired for buying IBM."
Also, you know what those "big, slow companies" have in common? They are successful businesses. Unlike most startups.
> It’s trivially easy to find counterexamples of companies failing because their software products were inferior to newcomers who delivered good results, fast development, and more stable experience. Startups fail all the time because their software isn’t good enough or isn’t delivered before the runway expires. The author is deliberately choosing to ignore this hard reality.
While I personally haven't seen a company going under due to bad code, one can also definitely make the argument that software that is buggy or doesn't scale will lead to lost profits, which could also eventually become a problem, or present risks to business continuity.
I still recall working on critical performance issues for a near-real-time auction system way past the end of my working day, because there was an auction scheduled the next day and a fix was needed. I also recall visiting a healthcare business which could not provide services, because a system of theirs kept crashing and there was a long queue of people just sitting around and being miserable.
Whether private or public sector, poor code has a bad impact on many different things.
However, one can also definitely make the distinction between keeping the lights on (KTLO) and everything else. If an auction system cannot do auctions for some needed amount of users, that's a KTLO issue. If an e-commerce system calculates prices/taxes/discounts wrong and this leads to direct monetary losses, that's possibly a KTLO issue. If users occasionally get errors or weird widget sizing in some CRUD app or blog site, nobody really cares that much, at least as far as existential threats go.
Outside of that, bugs and slow delivery can be an unfortunate reality, yet one that can mostly be coped with.
For example, pg said that Viaweb was successful because they had put care into their code, which allowed them to iterate quickly and integrate new features that customers requested, whereas competitors were held back by their cumbersome code and slow cadence of releasing features.
Friendster is a memorable example. It did not scale well, and fed-up users flocked to MySpace and Facebook, which came later.
More generally, it depends on how competitive the space the product operates in is, and whether quality is something the buyer values and is able to evaluate! Enterprises, for example, infamously don't appreciate quality as much as consumers do, because the economic buyer does not use the product.
- take a blank piece of paper (this is “the software”)
- pick two random points on the paper roughly 3 inches apart (“a requirement”)
- draw a line between the two points; the line cannot cross any other line (“the implementation”).
Repeat the exercise multiple times.
You’ll quickly learn that unless you have a “system”, drawing a three inch line goes from taking about 2 seconds, to taking 10, 15 or 30 seconds, despite “the requirement complexity” being exactly the same every time.
Now try playing with people taking turns. :)
I like playing this game with people who think that writing bad code is fine, or that they can work by themselves and not worry about what other people do as long as “their own” code is good.
You can still solve pretty much any problem with enough time and effort; but if you don’t have a “system” for good organisation, eventually, you’ll be struggling to solve basic problems because of layered unmanageable requirements, no matter how smart or capable you are.
…it’s not about shipping bugs, it’s about fundamentally reducing the speed of delivery over time, incrementally, in an unbounded fashion that eventually makes systems impossible to make changes to without compromising on requirements.
(Obviously the game is contrived, but in my experience it works very well for explaining to business people why past requirements sometimes have to go away.)
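The game also simulates nicely. A loose sketch under simplifying assumptions (straight lines only, and a round's "requirement" is resampled until a routable one comes up, so effort shows up as attempts per round); everything here is my own construction, not part of the original game:

```python
import random

def crosses(a, b, c, d):
    """True if segment ab properly intersects segment cd (endpoints are
    random floats, so collinear special cases are safely ignored)."""
    def side(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (side(a, b, c) * side(a, b, d) < 0 and
            side(c, d, a) * side(c, d, b) < 0)

def play(rounds, seed=0):
    """Each round, sample a random 'requirement' (a point pair) until its
    straight-line 'implementation' crosses no existing line; record how many
    attempts each round took as a proxy for effort."""
    rng = random.Random(seed)
    lines, attempts = [], []
    for _ in range(rounds):
        tries = 0
        while True:
            tries += 1
            a = (rng.random(), rng.random())
            b = (rng.random(), rng.random())
            if not any(crosses(a, b, c, d) for c, d in lines):
                lines.append((a, b))
                attempts.append(tries)
                break
    return attempts

effort = play(30)
print(sum(effort[:10]), sum(effort[-10:]))  # early rounds vs late rounds
```

The effort per "requirement" tends to climb as the paper fills up even though each requirement is, in isolation, exactly as complex as the first one, which is the parent's point about unmanaged accumulation.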
What "system" would you use for random "requirements" (dots on a piece of paper)? The only reasonable thing to do is to push back on "requirements" that don't fit into the existing requirements/software/implementation.
This blog post is a response to an advertisement for coaching services. The article it responds to was so content-free and sales-focused that it actually got flagged here on HN.
If I scribble a random time stamp on a piece of paper, frame it, and call it a clock, it's still right at least once a day (24-hour notation or bust :)). The original article was just a garbage native advertisement with the wrong conclusion, I agree, but the ideas it stated weren't really wrong: Microsoft is a pretty good example of a company that constantly leaves its products in very buggy states for long periods of time, and it expects you to use (read: pay Microsoft for) the products in that state. They're still dominating. While I admire the ReFS team for what they pulled off, for the first 2 years it was virtually unusable if you tried to use it like they told you to, due to either memory-leak bugs or silent data corruption.
It still got used and was one of the common deployments for MS SQL and Exchange during that time.
The original article doesn't sell a solution, it sells some spin from the author, but the original author wasn't wrong that there is a ridiculously high tolerance for buggy, low quality software, and the reason isn't super clear. These aren't cheap services either, they're billion dollar companies that can barely keep the services running sometimes or are incredibly slow to react to missing features or features not working well/correctly.
The problem is that anyone with a semblance of professional pride will find it mentally quite difficult to knowingly ship bad software.
Like, sure, even if we make best efforts to catch both obvious and less obvious bugs, all software we ship will still be full of bugs. But knowingly shipping software full of obvious bugs... it feels unprofessional. And makes you really feel the weight of all the inevitable bug reports that come in.
The alternative often becomes to warn about it, have the organization respond (whether explicitly or otherwise) "we don't care", and then, to preserve your sanity, say to yourself "well, then all bets are off, and I'm not responsible" which also isn't healthy, because it leaves you jaded, disengaged and robs you of your sense of professionalism.
I don't have a solution here lol other than find an org that cares about quality to an acceptable degree.
Perhaps one solution is to divorce your own self worth from the quality of the software you happen to write and to realize that you are an employee first and a software engineer second - make decisions that increase the amount of money the company makes and look for self worth in that itself or in hobbies/family/friends/something else.
You are not being professional if you are not aligning with the values being optimized for in your context. There should be an explicit decision tree set up for your work context that informs you what those values are. If there isn't, have some conversations and align.
yeah. and not just "not professional". It's basically giving the finger to a whole bunch of (generally) lower-paid support staff answering phone calls from irate customers.
This is true... but it's also true of ≈everything else in the organization. An organization that could be broken by any single thing would not survive for long! This includes the things people tediously insist to be "more important" than software quality. It certainly includes the usual suspects that push people to cut software corners: companies don't break by missing OKRs or slipping schedules or not "focusing" or any other management pablum any more than they break due to software quality.
But, of course, all of those can contribute to a company failing—including bugs, slow delivery and poor quality code. The dynamics and the extent to which different factors matter is inherently context-specific.
Companies can fail despite doing the right thing and succeed despite doing the wrong thing. This doesn't make the wrong thing effective or reasonable, it just means we're operating in the domain of complex, stochastic systems, so our simple mental models of cause and effect become misleading.
The fact that low-quality software doesn't directly cause a company to fail does not imply that investing in software quality is not right. I am firmly convinced that maintaining high quality software will have better outcomes in expectation than cutting corners pretty much universally, if only because it simply doesn't cost much—it's a matter of setting the right culture more than anything else.
There are interesting observations to be made here. Variations on "software quality doesn't matter because it won't destroy the company" are not it.
There are many blog posts like this one that deny (often deride) the value of software craftsmanship, including the skillset of planning.
They almost always have this same conclusiveness to them as well ("she was right"), with no tangible backing other than personal anecdata ("everyone sucked who I worked with, so everyone must suck").
If you zoom out, the reality is that people in the software industry are spoiled AF. A majority of practitioners get paid a ridiculous amount and get away with murder.
It is true that current market dynamics give people in this industry pretty much zero incentive to get good at the craft. So people don't and then to rhetorically justify their philistinism, they write blog posts like this.
It is also true that many businesses get away with writing absolutely shitty software and stay in business because of these dynamics. But that doesn't make it "ok" and doesn't say anything about the enormous amount lost to such apathy.
I don't think the blog post derides the value. Rather, it points out that, while fixing bugs is valuable to the dev (professional self-respect shipping quality products) and the user (obvious), the way software market works in practice is that it's not particularly valuable to the company - at least, not valuable enough to justify the expense. Speaking as someone with 20 years of software engineering experience, I think this is an accurate assessment of things. But at the same time you're absolutely correct that it doesn't make it okay, and it is actually detrimental to both users and developers for the aforementioned reasons.
But, well, this is capitalism for you - whatever is the most profitable thing to do is what gets done regardless of how not okay it really is. I think the question we should be asking is: if users hate using crappy buggy software, and devs hate writing it, but it still gets written, aren't our basic economic incentives clearly out of tune?
The core of this argument is really about the emotional health of the workers.
An emotionally healthy person tends to want to maintain their environment. It doesn't do anything objective to have nice furniture and pretty paintings, but you like it in your house. You want to feel at peace, so you adjust little things...the kitchen would be nicer with the plates in a pile over here instead of just pulling them out of the dishwasher.
At work, you're the householder of a million technical details, and to feel good about it you want to maintain your environment! If that window has an ugly redraw you want to recode it, if the ribbon has big ugly icons you want to collapse it into a strip menu, if the email client freezes whenever WiFi gets spotty you get annoyed because the code is still from 1995 under the surface. The software you're making and using, in service to your own craft, has an effect on you. If it makes you feel ugly, you want to fix it.
This blog post -- which probably wasn't written on company time -- is here because someone needed to reorganize their thinking to feel emotional health. I think we should insist on doing a better job than just sacrificing our work to feed the money machine, even if our self-described jobs are just to turn funding into paychecks.
Precisely. I get not going overboard and being a perfectionist. One has to ship. But also, you should have SOME pride in your work. You should want to not make total shit.
Both this article and the one it references only seem to provide more evidence that the descent into mediocrity of the software industry is certainly happening. That said, there's a difference between actual quality, and "quality" as popularised by metrics-driven dogma.
Example for the last point: people seem to have forgotten how much of a "quality" shitshow Twitter was. The thing was written in Ruby on Rails, and downtime was normal and expected by everyone. Yet the platform thrived and the user base grew. Or Facebook: the page didn't fully load like 30% of the time, meaning that at least one panel was broken, some picture was missing, etc.
Maybe, just maybe, the best people to build complex software systems that serve millions of people aren't the ones who can do the most leetcode problems.
This is how I feel about most "things I hate". Despite my strong feelings towards them, there's a reason why they are the way that they are. Almost always it comes down to my interests & priorities not aligning with reality/everyone else.
And even though the points raised are mostly valid I think there is a lot of nuance to this.
If you write control software for medical devices, this is simply not true. The same goes if you handle incredibly sensitive data, critical infrastructure, or assistance systems for air travel, and you can probably come up with a bunch more cases where bugs are not OK.
And also I think this sentence
> Where high quality is nice to have, but is not the be-all-and-end-all.
while mostly true in all the other cases, doesn't do justice to what mediocrity actually means for everyone involved. I have seen very varying levels of quality requirements/enforcement, testing, delivery speed, and eventual bugs, at different costs, at different companies.
And in general I found that higher-quality software and fewer bugs were associated with much happier and more productive developers. And I would make the argument that less happy developers => more turnover => more second-order costs. So while it might not make or break the company, it certainly has a negative impact on profits downstream. And for customers too: vendor lock-in is a great thing for a company to profit from, but if you really make a shitty product, it opens up a lot of avenues for competitors to eventually cannibalize your market, or for users to jump ship the next time they can. Take MS Teams, for example. Lots of lock-in, but trust me, the second better competitors are on the table I will vigorously fight to switch. It's a slow burn, but a burn nonetheless.
The examples you give have regulations for software. Even if companies want to "move fast and break things," with medical devices there are required tests and the testing and results are audited by the feds (here in the US), so you're forced to move slowly and not be broken (in ways that are regulated).
That does not mean "high quality." You can still have an architecture that's tightly-coupled spaghetti code that resists all human efforts at debugging. It just means "we threw enough paperwork at the auditors that they're not asking for more."
You'd be surprised how many businesses are run by completely incompetent people and still manage to survive.
A relative of mine was an accountant for a small manufacturing plant, and the systemic problems from top to bottom were shocking. They worked there for well over a decade and also managed a lot of other business aspects: nothing got shipped, billed, ordered, or paid without them knowing. The company always had just enough income to keep going. Somehow it was sustainable, without any hidden external sources of money.
Yesterday somebody shared this with me, and it made me reflect on what we could learn from other engineering disciplines. It might be just me, but I have this gnawing impression that a lot of the pains we have in software are self-imposed.
As someone who works for an organization that is on the receiving end of bad software, I can assure you this is not the case. There is lost opportunity, due to decreased productivity and outright lost clients. There is a souring of the work environment, an environment where people are constantly putting out fires and there is constant churn in staff because of that.
Perhaps the organization I work for is unique in that it is public sector and cannot go out of business (which appears to be the author's definition of broken). Perhaps the author has never dealt with software where edge cases involving lost and phantom data are a daily norm. Clients, real people with real lives, are literally lost by the system.
Things are so bad that losing several years of business records (or, more likely, paying to have them maintained by a retired system for some legally defined period) is considered as a viable option.
It really, really depends on the use-case. You certainly don't want bugs in your MMU, process scheduler, sensors, security libraries, etc.
But a piece of line-of-business software that sends out the wrong email once because someone made a bug in a batch script? Sure. Not the end of the world.
Users will tolerate some level of errors. Even if that level is sometimes expected to be 0.
I disagree with this conclusion because it's taking as implicit the assumption that the organization is the thing to be victorious, which is a misunderstanding of organizational politics.
There is no organization. There is only people. Some of those people, like your boss or your investors, may want you to burn yourself out or do unethical things to your customers so they can make more money. It's your prerogative to tell them to fuck off, and go work somewhere else if you don't get your way. Some people don't have that capacity. Software developers generally do.
Perverse organizational incentives will pressure you to be bad. Refuse.
"Software quality" is such an ambiguously defined term that all of these can simultaneously be true (for different products):
* software quality is (more) important
* software quality is not (very) important
* bugs are not important (aka "i don't test, but when I do, it's in production")
* the fewer bugs you have the better, but they're not gonna kill you
* bugs are safety-critical
* slow delivery is fine
* slow delivery is not fine
In the last 20 or so years of working professionally as a software developer, in various teams and quite different projects, I've noticed that:
* people (decision makers, software buyers/clients) always say they want higher quality
* same people very rarely want to back that up with their budget
* people almost always prefer more features to more quality (even when explicitly told that more features are not a good thing, e.g. at the MVP stage of a startup)
* most people except dedicated QA engineers can only think of a happy path (if everything works the ideal way); this includes not only other engineers (who try to think up failure scenarios but often miss important ones), but also designers (when have you ever seen a design mockup for "DNS lookup failure connecting to the server" case?), and definitely product owners or clients (in context of a dev agency)
* I have never worked on a nontrivial software project that didn't change requirements/scope
I, like many other software developers who chose this field out of passion rather than stratospheric wages, was for a long time appalled by this apparently lax approach to quality. Don't you want to have the best possible product?
But working with a lot of diverse clients while doing dev agency work, and running several startups myself, has taught me that software quality is not, and (it pains me to write this) should not be, the top priority. Product-market fit, treating your customers well, having a sustainable monetization strategy, marketing and sales ... if you don't execute well on those, nobody will notice the (lack of) quality[0].
In most organizations (except the ones that are swimming in cash) it's a tough act to balance all of the above. While I wish for all the projects I work on to be the best they can be, a programming work of art if you will, I can certainly empathize with people who must prioritize otherwise.
[0] unless the quality is safety-critical, in case I need to point this out explicitly.
Ultimately it all comes down to the business case. If there isn't one to invest in CI, in developer time to build a test suite with 100% coverage, and in the ongoing upkeep of all that, then it's not going to happen.
We are too often religious about these things, and may not have visibility within the business to see why there isn't a case for it.
Having said that, it can also be very hard to convince management of the business case. They may have had a bad experience with these tools and practices before and see them as a waste. They may also, in that specific case, be right.
Maybe this is my years of experience talking, but I'm curious whether anyone else sees a connection between slow delivery (aka slower development time) and fewer bugs? The idea is that developers have more time to think about the solution, write better code, and overall not feel so rushed just to get something out the door. In my experience, the more pressure put on developers to get the thing done, the more bugs are introduced.
[+] [-] commandlinefan|2 years ago|reply
[+] [-] qingcharles|2 years ago|reply
I've been writing code for 40 years and I still get caught out myself. I try to give realistic deadlines and then some fucking bug in the compiler or other such insanity will come along and make me look like a fool.
[+] [-] embwbam|2 years ago|reply
The planning fallacy is something like "Humans are hopelessly optimistic. Even if you think you're planning for the worst case, unexpected things will happen, and your estimate is more like the best case scenario. This is true even if you try to compensate for the planning fallacy by being more pessimistic".
So you can't fix it by padding more weeks, trying to think of all the things that might happen, etc.
The only way to compensate for this is detailed in Kahneman's "Thinking, Fast and Slow" (a must-read for many reasons): start with prior evidence. Rather than making up your estimate, start with "How long did it ACTUALLY take the last time someone did something like this?" and only then modify the estimate for circumstances.
There is a huge gap between these two methods:
1. "How long does it take an average team to create a SAS product? Maybe a year?" Then modify for "but our team is extra smart, so maybe I'll shorten that to 10 months"
vs
2. "How long will this take? It's not a complicated product, and our team is smart: maybe 4 weeks for feature X, 6 for feature Y, 2 to deploy. I'll even throw in an extra 4 weeks of padding for unknowns, so 16 weeks".
[+] [-] ebiester|2 years ago|reply
Agile, for example, suggests that decreasing the cost of change at the expense of certainty of cost will increase the total chances of delivery within cost and date. Outsourcers are (implicitly) arguing that the decrease in total cost is worth the other friction in the organization.
This all goes back to return on investment. In the most simplistic case, if I have ten people working on this task now, and this software will allow me to reduce that to six, then that's four people I don't have to pay for that task. If those four people are making $100,000 a year, and the software lasts 10 years before having to be replaced, that's 4 million dollars. The software plus maintenance needs to cost less than 4 million dollars to make sense. We all know the simplistic case.
But if I can trade $100,000 in guaranteed cost for a reduction of $500,000 in risk to making that goal, I can trade cost for certainty, and that's going to be a good trade. Now, as developers we know that these numbers are often fudged, but everyone realizes this is inexact and just the best we all can do. We need to know how to think about these in terms of arguing against bad investments and steering toward good investments by knowing how the tradeoffs work.
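Using the same illustrative numbers as the comment above, the arithmetic looks like this:

```python
# The simplistic ROI case from the comment above, as arithmetic.
people_before, people_after = 10, 6
salary_per_year = 100_000
software_lifetime_years = 10

# Ceiling on what the software (plus maintenance) may cost and still
# make sense: the payroll it saves over its lifetime.
max_justified_cost = (
    (people_before - people_after) * salary_per_year * software_lifetime_years
)
print(max_justified_cost)  # prints 4000000

# The cost-for-certainty trade: a guaranteed $100k spend that removes
# $500k of expected overrun risk is worth $400k in expectation.
guaranteed_spend = 100_000
risk_removed = 500_000
expected_gain = risk_removed - guaranteed_spend
print(expected_gain)  # prints 400000
```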
[+] [-] cameldrv|2 years ago|reply
1. When doing estimates, estimate hierarchically and break down every task until the leaf nodes are 4 hours or less. If the leaf nodes are a week of effort, that means that you don't yet understand fully what the task entails. If you don't know what the task entails, it is very difficult to get an accurate estimate.
2. Maintain a database of various projects and features, and use them for comparison purposes. If you know how long something actually took in the past, you can much more accurately estimate how long it will take to do something similar again.
3. Include specific time for various types of "overhead": meetings, code review, writing tests, debugging, making presentations, etc.
4. Derisk ahead of time the things that are truly unpredictable: doing something in a brand new programming language or framework, training an ML model on a new type of data, working on datasets of unknown quality, etc. For those you need a small up-front project to determine feasibility and generate an estimate.
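Points 1 and 3 are mechanical enough to sketch. Assuming a nested dict of tasks (all task names and numbers here are made up), the estimate is the sum of the leaves, with oversized leaves flagged for further breakdown and a multiplier for overhead:

```python
MAX_LEAF_HOURS = 4  # a bigger leaf means you don't yet understand the task

def total_hours(node, path="plan"):
    """Sum leaf estimates; warn on leaves that still need breaking down."""
    if isinstance(node, dict):
        return sum(total_hours(child, f"{path}/{name}")
                   for name, child in node.items())
    if node > MAX_LEAF_HOURS:
        print(f"warning: {path} is {node}h, break it down further")
    return node

# A hypothetical feature, broken down hierarchically (point 1).
plan = {
    "cart page": {"render items": 3, "edit quantities": 2},
    "payment": {"call gateway": 4, "error handling": 4, "receipt email": 3},
}

base = total_hours(plan)   # 16 raw hours
overhead_factor = 1.3      # meetings, code review, tests, debugging (point 3)
print(f"{base}h raw, {base * overhead_factor:.1f}h with overhead")
```

The warning path is the useful part: any leaf over the threshold is a sign the breakdown, not the estimate, needs more work.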
The main thing is that this is clearly not impossible. Years ago I got really interested in the Apollo program as an example of bringing in perhaps the most complex and risk-laden project ever attempted, on time. One of the things that clearly stands out about that project is how much derisking happened first: at one level you had the Mercury and Gemini programs figuring out human spaceflight generally (and docking, with Gemini), and things like the first-stage engines of the Saturn V were already in development in the mid-fifties.
[+] [-] sanitycheck|2 years ago|reply
For example:
If the job involves dealing with anyone from Samsung or LG, it will take 2-3 times as long as you think.
If the job involves adding in a commercial third-party component which has a website about it, and that website doesn't clearly explain what the benefit of the thing actually is, the job will take 10-20 times as long as it should.
If the job involves a new feature Amazon or Google have just launched on one of their devices, and you are being CC'd on email chains involving representatives of Amazon or Google who are claiming implementation is very easy, the job will take a minimum of 6 months and may never be completed because the feature will be dropped or replaced with V2 before V1 ever actually works.
[+] [-] PragmaticPulp|2 years ago|reply
> A company going under.
What a wild assertion: The OP hasn’t personally seen a company fail, and therefore software quality doesn’t matter? Bugs and slow delivery are fine?
It’s trivially easy to find counterexamples of companies failing because their software products were inferior to newcomers who delivered good results, fast development, and more stable experience. Startups fail all the time because their software isn’t good enough or isn’t delivered before the runway expires. The author is deliberately choosing to ignore this hard reality.
I think the author may have been swept up in big, slow companies that have so much money that they can afford to have terrible software development practices and massive internal bloat. Stay in this environment long enough and even the worst software development practices start to feel “normal” because you look around and nothing bad has happened yet.
[+] [-] Yoric|2 years ago|reply
Once, at startup: the code inherited from Netscape was... let's say, charitably, difficult to maintain. Turning it into something that could actually be improved upon and tested took years without any release of the Mozilla Suite.
Once when Chrome emerged. Not because the code was particularly bad, but because its architecture didn't reflect more modern requirements in terms of responsiveness or security. See https://yoric.github.io/post/why-did-mozilla-remove-xul-addo... for a few more details.
[+] [-] fifticon|2 years ago|reply
[+] [-] ryandrake|2 years ago|reply
This blog post is saying “Staying healthy doesn’t matter because neither I nor anyone I know died so far.”
[+] [-] olivierduval|2 years ago|reply
- the customer: either B2B or B2C
- the market share: minimal (< 1%... including all startups) or dominant (> 30%)
- B2C is really dynamic, and a few bad versions/products can make the customers fly away (except under strong dominance - like Windows - or when there's no equivalent product) and shut down a company. Price can be a strong factor, and the cost of migration/switching is usually not considered
- B2B is more conservative: it's hard to enter the market (so small market shares will need a lot of time to take off... if there's no competitor), but once you're in, the cost of change for a company is usually high enough to tolerate more bad versions (and even more if there are few competitors, incompatibilities between products, legal requirements to keep records, a lot of "configuration", strong training requirements for a lot of people...). Companies as customers don't see a switch of software as a technical problem (replacing one editor with another) but as a management problem (training, cost to switch, data availability, availability of people already trained, cost of multi-year/multi-instance licences...)
[+] [-] drewcoo|2 years ago|reply
Is it? What I've seen is the opposite.
Businesses can be terrible top to bottom, slow, inefficient, and painful for customers, and still keep going for years and years. It's more about income/funding than product.
> I think the author may have been swept up in big, slow companies that have so much money that they can afford to . . .
That's what I'm talking about. They are legion! They could be companies that serve a niche that no one else does or with prohibitive switching costs (training is expensive). They could also be companies that somehow got enough market share that "no one gets fired for buying IBM."
Also, you know what those "big, slow companies" have in common? They are successful businesses. Unlike most startups.
[+] [-] KronisLV|2 years ago|reply
While I personally haven't seen a company going under due to bad code, one can also definitely make the argument that software that is buggy or doesn't scale will lead to lost profits, which could also eventually become a problem, or present risks to business continuity.
I still recall working on critical performance issues for a near-real-time auction system way past the end of my working day, because there was an auction scheduled the next day and a fix was needed. I also recall visiting a healthcare business which could not provide services, because a system of theirs kept crashing and there was a long queue of people just sitting around and being miserable.
Whether private or public sector, poor code has a bad impact on many different things.
However, one can also definitely make the distinction between keeping the lights on (KTLO) and everything else. If an auction system cannot do auctions for some needed number of users, that's a KTLO issue. If an e-commerce system calculates prices/taxes/discounts wrong and this leads to direct monetary losses, that's possibly a KTLO issue. If users occasionally get errors or weird widget sizing in some CRUD app or blog site, nobody really cares that much, at least as far as existential threats go.
Outside of that, bugs and slow delivery can be an unfortunate reality, yet one that can mostly be coped with.
[+] [-] codetrotter|2 years ago|reply
For example, pg said that ViaWeb was successful because they had put care into their code, which allowed them to iterate quickly and integrate new features that customers requested. Whereas competitors were held back by their cumbersome code and slow cadence of releasing features.
[+] [-] esafak|2 years ago|reply
More generally, it depends on how competitive the space the product operates in is, and on whether quality is something the buyer values and is able to evaluate. Enterprises, for example, infamously don't appreciate quality as much as consumers do, because the economic buyer does not use the product.
[+] [-] the_gipsy|2 years ago|reply
[+] [-] wokwokwok|2 years ago|reply
- take a blank piece of paper (this is “the software”)
- pick two random points on the paper roughly 3 inches apart (“a requirement”)
- draw a line between the two points; the line cannot cross any other line (“the implementation”).
Repeat the exercise multiple times.
You’ll quickly learn that unless you have a “system”, drawing a three inch line goes from taking about 2 seconds, to taking 10, 15 or 30 seconds, despite “the requirement complexity” being exactly the same every time.
Now try playing with people taking turns. :)
I like playing this game with people who think that writing bad code is fine, or that they can work by themselves and not worry about what other people do as long as “their own” code is good.
You can still solve pretty much any problem with enough time and effort; but if you don’t have a “system” for good organisation, eventually, you’ll be struggling to solve basic problems because of layered unmanageable requirements, no matter how smart or capable you are.
…it’s not about shipping bugs, it’s about fundamentally reducing the speed of delivery over time, incrementally, in an unbounded fashion that eventually makes systems impossible to make changes to without compromising on requirements.
(Obviously the game is contrived, but in my experience it works very well for explaining to business people why past requirements sometimes have to go away.)
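The game can even be simulated: drop random segments into a unit square, rejecting any that cross an existing one, and count how many attempts each new "requirement" costs. A toy sketch, not a claim about real codebases; the crossing test is the standard orientation trick and ignores degenerate collinear touches, which have probability zero for random float coordinates.

```python
import random

def orient(a, b, c):
    # Sign of the cross product: positive if a -> b -> c turns counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(p1, p2, q1, q2):
    # Proper segment intersection: each segment's endpoints must lie on
    # opposite sides of the other segment's supporting line.
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def place_segment(existing, rng, max_tries=100_000):
    # Keep proposing random "implementations" until one crosses nothing.
    for tries in range(1, max_tries + 1):
        seg = ((rng.random(), rng.random()), (rng.random(), rng.random()))
        if not any(crosses(*seg, *old) for old in existing):
            existing.append(seg)
            return tries
    raise RuntimeError("no room left without a better system")

rng = random.Random(0)
segments = []
costs = [place_segment(segments, rng) for _ in range(25)]
# The first "requirement" always lands on try 1; later ones tend to take
# many more attempts, even though each is just "draw one line".
print(costs)
```

The rising attempt counts are the point of the game: identical-sized requirements get progressively more expensive as the accumulated structure constrains every new addition.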
[+] [-] idbehold|2 years ago|reply
[+] [-] newaccount74|2 years ago|reply
Speed only goes to zero if you insist on bug free software.
[+] [-] aaronbrethorst|2 years ago|reply
https://news.ycombinator.com/item?id=36615325
I don’t think anyone needs to waste their time arguing with a strawman.
[+] [-] csydas|2 years ago|reply
It still got used and was one of the common deployments for MS SQL and Exchange during that time.
The original article doesn't sell a solution, it sells some spin from the author, but the original author wasn't wrong that there is a ridiculously high tolerance for buggy, low quality software, and the reason isn't super clear. These aren't cheap services either, they're billion dollar companies that can barely keep the services running sometimes or are incredibly slow to react to missing features or features not working well/correctly.
I think this situation merits discussion.
[+] [-] bedobi|2 years ago|reply
Like, sure, even if we make best efforts to catch both obvious and less obvious bugs, all software we ship will still be full of bugs. But knowingly shipping software full of obvious bugs... it feels unprofessional. And makes you really feel the weight of all the inevitable bug reports that come in.
The alternative often becomes to warn about it, have the organization respond (whether explicitly or otherwise) "we don't care", and then, to preserve your sanity, say to yourself "well, then all bets are off, and I'm not responsible" which also isn't healthy, because it leaves you jaded, disengaged and robs you of your sense of professionalism.
I don't have a solution here lol other than find an org that cares about quality to an acceptable degree.
[+] [-] nickelbob|2 years ago|reply
[+] [-] tru1ock|2 years ago|reply
[+] [-] jonnycomputer|2 years ago|reply
[+] [-] tikhonj|2 years ago|reply
This is true... but it's also true of ≈everything else in the organization. An organization that could be broken by any single thing would not survive for long! This includes the things people tediously insist to be "more important" than software quality. It certainly includes the usual suspects that push people to cut software corners: companies don't break by missing OKRs or slipping schedules or not "focusing" or any other management pablum any more than they break due to software quality.
But, of course, all of those can contribute to a company failing—including bugs, slow delivery and poor quality code. The dynamics and the extent to which different factors matter is inherently context-specific.
Companies can fail despite doing the right thing and succeed despite doing the wrong thing. This doesn't make the wrong thing effective or reasonable, it just means we're operating in the domain of complex, stochastic systems, so our simple mental models of cause and effect become misleading.
The fact that low-quality software doesn't directly cause a company to fail does not imply that investing in software quality is not right. I am firmly convinced that maintaining high quality software will have better outcomes in expectation than cutting corners pretty much universally, if only because it simply doesn't cost much—it's a matter of setting the right culture more than anything else.
There are interesting observations to be made here. Variations on "software quality doesn't matter because it won't destroy the company" are not it.
[+] [-] nickelbob|2 years ago|reply
Except for sales?
[+] [-] wellpast|2 years ago|reply
There are many blog posts like this one that deny (often deride) the value of software craftmanship, including the skillset of planning.
They almost always have this same conclusiveness to them as well ("she was right"), with no tangible backing other than personal anecdata ("everyone sucked who I worked with, so everyone must suck").
If you zoom out, the reality is that people in the software industry are spoiled AF. A majority of practitioners get paid a ridiculous amount and get away with murder.
It is true that current market dynamics give people in this industry pretty much zero incentive to get good at the craft. So people don't, and then, to rhetorically justify their philistinism, they write blog posts like this.
It is also true that many businesses get away with writing absolutely shitty software and stay in business because of these dynamics. But that doesn't make it "ok" and doesn't say anything about the enormous amount lost to such apathy.
[+] [-] int_19h|2 years ago|reply
But, well, this is capitalism for you - whatever is the most profitable thing to do is what gets done regardless of how not okay it really is. I think the question we should be asking is: if users hate using crappy buggy software, and devs hate writing it, but it still gets written, aren't our basic economic incentives clearly out of tune?
[+] [-] hyperhello|2 years ago|reply
An emotionally healthy person tends to want to maintain their environment. It doesn't do anything objective to have nice furniture and pretty paintings, but you like it in your house. You want to feel at peace, so you adjust little things...the kitchen would be nicer with the plates in a pile over here instead of just pulling them out of the dishwasher.
At work, you're the householder of a million technical details, and to feel good about it you want to maintain your environment! If that window has an ugly redraw you want to recode it, if the ribbon has big ugly icons you want to collapse it into a strip menu, if the email client freezes whenever WiFi gets spotty you get annoyed because the code is still from 1995 under the surface. The software you're making and using, in service to your own craft, has an effect on you. If it makes you feel ugly, you want to fix it. This blog post -- which probably wasn't written on company time -- is here because someone needed to reorganize their thinking to feel emotional health. I think we should insist on doing a better job than just sacrificing our work to feed the money machine, even if our self-described jobs are just to turn funding into paychecks.
[+] [-] gaze|2 years ago|reply
[+] [-] userbinator|2 years ago|reply
[+] [-] H8crilA|2 years ago|reply
[+] [-] contravariant|2 years ago|reply
Which, interestingly, wasn't something a software engineer came up with.
[+] [-] distcs|2 years ago|reply
[+] [-] adamsmith143|2 years ago|reply
[+] [-] DougBTX|2 years ago|reply
[+] [-] von_lohengramm|2 years ago|reply
[+] [-] Escapado|2 years ago|reply
> Where high quality is nice to have, but is not the be-all-and-end-all.
While this is mostly true in all other cases, I don't think it does justice to what mediocrity actually means for everyone involved. I have seen widely varying levels of quality requirements/enforcement, testing, delivery speed and eventual bugs, at different costs, at different companies. And in general I found that higher-quality software and fewer bugs were associated with much happier and more productive developers. And I would make the argument that less happy developers => more turnover => more second-order costs. So while it might not make or break the company, it certainly has a negative impact on profits downstream.
And for customers too. Vendor lock-in is a great thing for a company to profit from, but if you really make a shitty product it opens up a lot of avenues for competitors to eventually cannibalize your market, or for users to jump ship the next time they can. Take MS Teams, for example: lots of lock-in, but trust me, the second better competitors are on the table I will vigorously fight to switch. It's a slow burn, but a burn nonetheless.
[+] [-] jonnycomputer|2 years ago|reply
https://en.wikipedia.org/wiki/Therac-25
[+] [-] drewcoo|2 years ago|reply
That does not mean "high quality." You can still have an architecture that's tightly-coupled spaghetti code that resists all human efforts at debugging. It just means "we threw enough paperwork at the auditors that they're not asking for more."
[+] [-] kayodelycaon|2 years ago|reply
A relative of mine was an accountant for a small manufacturing plant, and the systemic problems from top to bottom were shocking. They worked there for well over a decade and also managed a lot of other business aspects. Nothing got shipped, billed, ordered, or paid without them knowing. The company always had just enough income to keep going. Somehow, it was sustainable without any hidden external sources of money.
[+] [-] _benj|2 years ago|reply
Yesterday somebody shared this with me and it made me reflect on what we could learn from other engineering disciplines. It might be just me, but I have this gnawing impression that a lot of the pains we have in software are self-imposed.
https://web.cs.wpi.edu/~gogo/humor/hum_toast.html
[+] [-] II2II|2 years ago|reply
As someone who works for an organization that is on the receiving end of bad software, I can assure you this is not the case. There is lost opportunity due to decreased productivity, and there are outright lost clients. There is a souring of the work environment, one where people are constantly putting out fires and there is constant churn in staff because of it.
Perhaps the organization I work for is unique in that it is public sector and cannot go out of business (which appears to be the author's definition of broken). Perhaps the author has never dealt with software where edge cases involving lost and phantom data are a daily norm. Clients, real people with real lives, are literally lost by the system.
Things are so bad that losing several years of business records (or, more likely, paying to have them maintained by a retired system for some legally defined period) is considered as a viable option.
So yes, buggy software does break organizations.
[+] [-] agentultra|2 years ago|reply
But a piece of line-of-business software that sends out the wrong email once because someone made a bug in a batch script? Sure. Not the end of the world.
Users will tolerate some level of errors. Even if that level is sometimes expected to be 0.
[+] [-] AnthonyMouse|2 years ago|reply
There is no organization. There is only people. Some of those people, like your boss or your investors, may want you to burn yourself out or do unethical things to your customers so they can make more money. It's your prerogative to tell them to fuck off, and go work somewhere else if you don't get your way. Some people don't have that capacity. Software developers generally do.
Perverse organizational incentives will pressure you to be bad. Refuse.
[+] [-] senko|2 years ago|reply
[+] [-] samwillis|2 years ago|reply
[+] [-] unknown|2 years ago|reply
[deleted]
[+] [-] clintmcmahon|2 years ago|reply