I was explaining this to a friend who's a top-shelf cabinetmaker.
He was telling me how he would sell high-quality cabinets to homeowners, basically by designing a "dream kitchen" that far exceeds their budget, then backing down by removing features until they have something that still exceeds their original budget, but is quite good, and is something they actually want.
He was saying that I should use his methodology.
I explained that his sales are to the people that would actually use the cabinets, so they have a vested interest in high quality.
In my case, I would be working for a business that absolutely doesn't care about quality. They want the cheapest, shittiest garbage they can possibly get away with pushing. They don't use it, and many of them won't even be responsible for supporting it afterwards, so there's no incentive for quality.
The same goes for corporations that don't try to retain talent. If someone is only going to be around for eighteen months, is paid well, and is pressured to produce a lot of "stuff," then you can expect them to produce a lot of ... stuff. They don't give a damn about supporting it, because they are already practicing LeetCode for their next gig.
I have found that giving people a vested interest in Quality is essential. That often comes from being responsible for taking care of it after it is released, from using it themselves, or from having their reputation staked on it.
I don't feel the current tech industry meets any of these bars.
Most of the software I write is stuff that I use, and I am intimately involved with the users of my work. I want it to be good, because I see them several times a week. Also, I have a fairly high personal bar that I won't compromise. I have to feel good about my work, and that doesn't seem to sell.
When I started at Oracle yonks ago, there was a bizarre bug management system.
When a bug was found, it was assigned to the next available developer in the team. It didn't matter who wrote the code and created the bug - there was no feedback to them unless they happened to be the one who picked up the bug report.
The bug reports were printed out and stood in a tall pile on a manager's desk.
Quality was, as one might imagine, terrible. Junior developers had no idea what a bad job they were doing - senior developers spent their days fixing stupid bugs they would never have caused themselves.
The solution, blindingly obvious, was to start assigning bugs to the developer who caused them. The improvement was instant, because most people actually want to do a good job, and to be seen doing a good job. The pile of bug reports literally shrank before people's eyes.
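The mechanical part of that fix is small, for what it's worth. A minimal sketch of the idea (not Oracle's actual system), assuming each bug report names the offending file in a git repo; the triage fallback address is made up:

  import subprocess

  def last_author_email(path):
      # Ask git who most recently touched the file; None if nobody has.
      try:
          out = subprocess.run(
              ["git", "log", "-1", "--format=%ae", "--", path],
              capture_output=True, text=True, check=True,
          )
          return out.stdout.strip() or None
      except subprocess.CalledProcessError:
          return None

  def assignee_for_bug(offending_file):
      # Route the bug to the last author, or to a shared triage queue.
      return last_author_email(offending_file) or "triage@example.com"

Crude (the last committer isn't always the culprit), but it closes the feedback loop described above.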
The current industry seems to have moved back to these bad old days but on a longer timescale.
Resume-driven development abounds. Developers move on to the next gig before the impact of their decisions becomes obvious and quality plummets accordingly.
Devs leave in 18 months because their market value increases faster than their pay at their current company.
You can give devs a vested interest in their work by making sure compensation tracks or exceeds what they could get outside, because then they have a stake in remaining with the company.
Seems like building and selling your own software (aka being a solopreneur) would help you build capital while also forcing you to write good code.
I don’t know. If I have learned something in the last decade about software engineering and quality, it’s this: businesses only care about revenue and speed, while engineers don’t have an objective way to define quality (10 engineers would give you 10 different opinions about the same piece of code).
The only time I consider quality the top priority is when I work on side projects (because there’s only one opinion about what quality means, because there’s no time pressure, and because no one is asking me “are we profitable now?”).
I agree, and take it a little further. 10 engineers couldn't agree on the _point_ of quality code to begin with, let alone define how to get there. Consider two programs:
1. The spaghetti mess: half-done abstractions, inconsistent use of everything, everywhere. But it accomplishes the user's expectations perfectly.
2. A beautiful codebase: clean abstractions, tests and documentation everywhere. But the user hates it. It's slow, and requires some domain knowledge just to drive it and get a result.
Two very contrived examples, but not unrealistic ones.
Intuitively, better cabinet quality leads to a better cabinet experience. Does better code quality lead to a better product? It should, that's what quality is about. And if not, is "quality" even the right word?
Whenever I hear an engineer talk about quality, I clarify: what kind of quality are we talking about?
> 10 engineers would give you 10 different opinions about the same piece of code
This plays out in code review in a way that drives me insane. So much back and forth and time spent/wasted because there's always that one person or small group of people who insist their way is the one true one.
It's kinda like ice cream. You can argue if you prefer vanilla or chocolate, but everyone will agree when they're eating shit.
Now business takes this and runs with it: you can't even agree on if vanilla is better than chocolate, so go eat this pile of shit.
Different ideas are of course important. But companies that take every idea seriously usually end up with as many ridiculous problems as there are ideas. So not every idea is really a good one. Accepting this liberates a person.
'Eating your own dog food' is the best path to quality software in my opinion. Too many people working for a software company (developers, salespeople, product managers, etc.) never bother to use the software to do the kinds of things they expect their customers to use it for on a regular basis. Write the code. Make sure it passes some tests. Move on to the next project. This is common.
No wonder so many bugs never get reported until many customers run into them much later. I have a project I work on regularly. I use it regularly to do productive things, and I find most of the bugs just by doing that. I had a couple of different 'business partners' who talked a good game, but I could not get them to actually use the software and give me feedback on how to improve it. Neither one added much value to it, and both quickly moved on to other things.
> Write the code. Make sure it passes some tests. Move on to the next project.
Let's mention the missing step: don't even bother to run the code.
I'm embarrassed to admit how often I've been on teams that not only "don't use the software" (i.e. no dogfooding) but don't even "run the program". These teams ship bugs because not one of the people involved in making the software has ever actually run the damn app, let alone used it for any length of time.
This is shameful and embarrassing. Our profession is a joke. How can we even call ourselves professionals?
Another underappreciated effect of dogfooding may be that it reduces bloated functionality.
If you're not dogfooding, you rely more heavily on a mental model of the user. Just conjecturing -- not only does that model diverge across your organization, but it could also result in more top-down decisions about what a user wants, which probably creates more politics and friction all around your teams.
An issue I've encountered is that all of those non-developer people you mentioned generally don't eat the dogfood, even if they push the idea of dogfooding themselves.
They assume (or don't even pause to think about it) that developers eating the dogfood is enough.
At larger companies, "eating your own dogfood" only works well if people with power to make roadmap and time-allocation decisions also eat it.
Dogfooding was popularised by Microsoft, which AFAIK is still doing it, but it seems these days it's more like they're just being force-fed the dogfood without having any actual power to change it.
Dogfooding is good, but I wonder to what extent the problems that programmers have (the kind solved by software that programmers will use and can easily evaluate) are already solved pretty well by open source programs. I mean, imagine trying to sell a compiler. Good luck.
If you want to sell software, maybe one of the biggest markets to play in is software that programmers don’t find interesting to write and use?
There’s clearly not a 100% overlap between problems that programmers find interesting and open source projects. But it does apply a not-so-favorable filter, right?
Sounds obvious in theory, but the majority of applications are targeted at a very specific audience, e.g. banking, freight forwarding, even CRM. Not to mention that if you work at a mid+ size company you'll be working on just a piece of the application. Good luck trying to use that in your day-to-day life.
What's missing are the fault models with the solutions from 15 years ago:
- how to keep code review from becoming a politicized bottleneck
- deploying continuously to avoid train schedules
- comprehensive security, for both code and process
- ...
> For example the telephone system and the Internet are both fundamentally grounded on software developed using a waterfall methodology
Is this true? I can’t speak for telco, but I thought the internet in particular was developed incrementally, with a lot of experimentation. I mean, yes, the experimentation resulted in RFCs and STDs. But I thought these generally came after the software was working. And as someone who has implemented a few RFCs, I would not say my approach was remotely waterfall.
Indeed my perhaps incorrect version of events is that the waterfall approach is represented by the big loser in telco, the ISO OSI.
> Here’s a little-known secret: most six-nines reliability software projects are developed using a waterfall methodology.
I've designed and deployed Tier 1 services for a Big Tech company, and here's a little-known secret: when nothing changes, our reliability is higher than six nines.
Last year I measured our uptime during Black Friday for fun. Our error rate was measured in scientific notation because the number was so small. We didn't do any deployments or changes during that period.
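For intuition on what "six nines" actually budgets, the arithmetic is simple. A back-of-the-envelope sketch (the request volume is made up for illustration):

  SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000

  for nines in (3, 4, 5, 6):
      availability = 1 - 10 ** -nines
      budget_s = SECONDS_PER_YEAR * 10 ** -nines
      print(f"{nines} nines = {availability:.6%} -> {budget_s:,.1f} s/year downtime budget")

  # As an error-rate budget: at a hypothetical 1e9 requests/day,
  # six nines allows roughly 1,000 failed requests per day.
  print(1_000_000_000 * 10 ** -6)  # 1000.0

Six nines works out to roughly 31.5 seconds of downtime per year, which is why a steady state with no deployments clears it comfortably.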
When you operate in a steady state it's easy to achieve zero errors, and most downtime comes from random failures in hardware, e.g. servers crashing or network blips (which, operating at scale, are relatively common).
So my experience, and others', is that most outages are due to changes in the software, dependency outages, or the rare large-scale event that completely kills your SLA (e.g. a whole AWS region going down). Taming these is the essence of reliable software.
Whoever tells you that the best software is made using waterfall methodologies, from a fixed and never-changing set of specifications, lives in a fantasyland alien to the vast majority of developers.