Ask HN: Why is software quality always decreasing?
161 points | timhrothgar | 4 years ago
I've pondered this problem quite a bit. I initially perceived it as the result of engineers compromising in the face of business pressure or engineers making mistakes due to lack of experience or foresight. But as I've become more experienced (and worked as a manager), it seems like a tech business problem. Tech businesses face undesirable situations that result in low quality code as a side effect. Examples include critical employee departures, hyper growth, critical customer demands, and even changes in government regulatory requirements.
Can this be avoided for codebases that are old and large? Does anyone know of examples of codebases (public or private) that have maintained a high quality codebase that is large, old, or supported by a large number of contributors? If so, how is it done?
jandrewrogers|4 years ago
This was the right choice in most cases. The software from a few decades ago was inferior in almost every way to the software we have now for solving the problems we need to solve today. Most software does not live long enough to be "high quality", or it lives so long that its original design assumptions become obsolete and therefore less useful.
timhrothgar|4 years ago
tharkun__|4 years ago
I started out my career working for a large company (~10,000 employees in my actual company, and the entire corp had >100,000 employees even back then).
I would probably agree that what you say is true for most software. Now, why I find this curious is because the piece of software I was working on, I think, exhibited many if not all of those symptoms. It was old (about 15 years at the time, which by now is about 15 years ago, and I'm pretty sure they still use a lot of it even though I have no way to check). So by now, 30 years. It was archaic in many ways even back then. For ** sake, they were using CVS, they fixed stuff directly in Prod (scripting FTW, yay! </sarcasm>) and the code stank! Fixed-width formats all over the place. You were lucky to find delimited interchange formats. Live interfaces? Forget about it! Scheduled nightly runs! Unfortunately I can't say where I worked or on what, but you would all recognize some of it (and even your non-techie friends would), though not other parts (because the company ownerships aren't widely publicized and it's not anything that's "usually" on HN).
Inferior? In some ways yes, in other ways absolutely not at all. Their database and application were able to do things you could only dream of. This was (is, I guess) an application that has been used 24/7 all around the globe for ~30 years now, with excellent backwards compatibility (usually going back about 3-4 versions of the DB schema - full version history on all tables). Of course this was all in-house, not out there on the web. Even though the code stank and I cleaned up a lot of the parts I worked on in my years there, it was a marvel of engineering if you think about that. Some of what they were able to do 30 years ago, I still can't do at my current company! On the other hand, some things that are completely normal at my current company were and probably still are unthinkable in that place!
I am really thankful for having worked at that company as my first job. Even though at the time I probably didn't fully recognize this, it taught me a lot of perspective that I can use to this day to inform my thinking and decisions and ultimately helped me end up in the position I have right now. This was a team of 6 with myself and the team lead included. This was for the FE (fat client in a language many people won't even recognize or only by name) and the BE (scripting languages of various types).
john_moscow|4 years ago
It is somewhat similar to all other economies of scale. Like AdWords, where you are one click away of being connected to thousands of ad publishers, but if the algorithm says "no", you get banned and will never get a chance to talk to a person. Like modern electronics where you can buy a device assembled in Philippines from parts made in China and designed in the U.S. for a fraction of a price of making it locally, but if a single $0.08 capacitor blows up, you're stuck throwing the entire thing out because the pipeline is not optimized for repairs.
PragmaticPulp|4 years ago
If anything, it feels like common software quality has trended upwards as software engineering learning materials have become more widely and freely available across the internet and we have so many more open-source projects to learn from.
> The one thing that's been pretty consistent is excessive technical complexity (aka tech debt).
Technical complexity and tech debt are two entirely different things. Do you think it’s possible that you’re simply missing the “good old days” when computers did less, software expectations were lower, and it was possible for a tiny team to completely understand and operate a useful software package by themselves?
trinovantes|4 years ago
timhrothgar|4 years ago
You make a good point though. Perhaps I just miss the good ol' days of working on small teams with small codebases that were pretty easily maintained.
PaulHoule|4 years ago
cblconfederate|4 years ago
If the quality of desktop software was going up, it would be doing more than it did in 1997. I don't think it does, in fact it is being dumbed down to look like mobile software (which is inherently lower quality due to limited UI). So how come the quality is going up but the user benefit (i.e. time it saves) has not changed?
gizmo|4 years ago
Nowadays there is no such filter. Anybody can get something to barely work by copy-pasting from stackoverflow. This isn’t a negative by itself, it just means that we now have many professional programmers who never had to try hard. And quality goes down as a result.
Code quality tends to be the worst in areas with very low barriers to entry (web stuff and such) and very high in domains where you need to be a good engineer in order to get anything done at all.
pornel|4 years ago
I'll argue that StackOverflow has improved things a lot. Now you can search for your problem, or publish your code somewhere, and people will gladly tell you how wrong you are. Before, you could be wrong and keep writing spaghetti code offline while being blissfully unaware of how horrible it is.
I'll also question whether low barriers to entry lower average skill. Low barriers to entry mean increased competition, which creates a pressure to differentiate yourself. OTOH with high barriers to entry, once you get in, you can stay complacent, because you're irreplaceable whether you improve or not.
phendrenad2|4 years ago
What we have are social problems, people problems, "disruption" to existing industries, etc. This is software "eating the world" (because it ran out of software problems to eat). This is why every startup pitch is "We're disrupting the commercial real estate loan industry" or "we're disrupting the mine subsidence claims industry". Just absolutely insane hail-mary startups trying desperately to find some niche that hasn't been invaded by software already.
redisman|4 years ago
timhrothgar|4 years ago
wonderwonder|4 years ago
This is all my anecdotal experience, I could be totally wrong in the grand scheme of things.
GianFabien|4 years ago
davidw|4 years ago
I read a wonderful article that went into this in some depth a while back but can't for the life of me recall where.
Basically, if you're doing some kind of NASA mars rover software, you go over it again and again and are really careful and all that costs a lot of money. It also means you have fewer features and it all takes longer. If you tried to use that sort of process on some banal bit of everyday software, it'd be way more expensive than the competition and have fewer features. You'd go out of business.
I also agree with the other commenters that quality hasn't really declined over the years.
Wowfunhappy|4 years ago
There are economy-of-scale issues to consider though. I use OneDrive because everyone else at my company has files stored in OneDrive. Still, I think my company collectively loses a massive amount of time due to OneDrive crapping out, and I think we'd pay more if there were a solution that was really rock solid.
The problem is that no one is doing it. iOS and macOS are on yearly update schedules in which Apple introduces half-baked features just to remove them again a few years later, and Microsoft won't offer Windows LTSC to consumers. I retreated to an eight-year-old version of OS X and it's great, but I'm also crazy; most people can't do that. :)
streetcat1|4 years ago
1. The software creation process moved from designing software to "growing" software, with iteration time moving from months to weeks. So there is no conceptual integrity.
2. MVPs turning into final products. Basically, quick and dirty code becomes the foundation of the architecture, with no time to refactor.
3. Short tenure time. I think the average tenure for young developers is less than a year. Hence, the knowledge of the code / domain / abstractions is lost.
4. The market values speed over quality. Software managers are compensated for delivery, not for quality.
vsareto|4 years ago
5. Planning - your abstractions sometimes don’t make sense in the face of new requirements. If the people planning the features of the product don’t do their own thinking, you’ll be guessing at what they’re going to do next. When that’s the case, it’s up to a dice roll to see if your abstractions will work nicely with the future requirements.
Gigachad|4 years ago
Most of the time as a software dev company, you are better off building as much functionality as possible and just keep things working enough that people can deal with the issues.
timhrothgar|4 years ago
PaulHoule|4 years ago
Steve Wozniak 'coded' Breakout in 45 chips on a circuit board, then coded it in assembly and coded it again in BASIC to prove somebody could.
There was a limit to how big of a program you could fit in an Apple ][.
A modern game can have a development budget bigger than a Hollywood movie's and fill a whole Blu-ray disc.
From one perspective it is a miracle of progress that the modern game works at all.
sowbug|4 years ago
But simpler has its cost. Whenever I fire up an emulator to relive my childhood Apple II days, I'm amazed how crude that stuff was. I remember magical worlds and incredible, fast animations. Today they're just flickering 16x16 blobs.
majkinetor|4 years ago
The best you can do is to have automatic tests, lots of them, and make them work as intended (good tests are very hard to make). Those make refactoring possible and make specific quality guarantees.
burrows|4 years ago
I don’t know much physics, but this word-soup about entropy is mistaken, isn’t it? Because something something about closed systems vs open systems and putting energy into systems (eg, people working on the codebase)?
> The best you can do is to have automatic tests, lots of them, and make them work as intended (good tests are very hard to make).
I’m not sure what “best” means here, but there are other tools for improving software quality beyond tests, including and in particular formal methods.
> Those [tests] make refactoring possible and make specific quality guarantees.
What quality guarantees do tests make? That the build passes the test suite?
hogrider|4 years ago
wanderr|4 years ago
Now I try to make sure my team has time allocated to paying down technical debt, but most of the time that is only enough to keep the situation from getting worse, rather than really improving it.
daniel-cussen|4 years ago
spenczar5|4 years ago
But this means bad code changes more rarely and more slowly, because it is hard. So the good code gets most of the modifications. Now, even if most of the time the modifications maintain quality, they certainly sometimes turn good code into bad code.
This is a system that gradually chews up your good code and turns it bad, until you have a big nasty mess you throw out, and start over again.
bob1029|4 years ago
For me, this is a game of perceptions where crappy software becomes popular and profit motive makes it risky to attempt iterations. No executive working at Microsoft is interested in the liability that would come with a win32 rewrite of Teams, even if the UX would triple in quality overnight.
If anything, the quality of most software should be dramatically better than ever before, especially when you factor in the relevant tools & ecosystems.
ajkjk|4 years ago
One big thing I think is overlooked is: the kind of people who start projects that succeed tend to be good engineers, and the kinds of people who jump onto them later tend to be less good. Not bad, per se, but just not as remarkable. Usually these people are plenty good at the business's needs: shipping features, fixing bugs. But not at the kind of holistic, visionary, motivated work required to unfuck a massive project.
IMO people who are mediocre programmers (which I kinda count myself among? I'm trying to be better but it's hard to keep the motivation up) don't understand how much better at this the really good engineers are because they're almost never exposed to them. You don't write code alongside the best programmers at your random corporate job, because the best programmers don't work there (or if they do, they usually aren't writing much code). The senior engineers are senior because they're adequate programmers and excellent shippers of products. Etc.
analog31|4 years ago
Thankfully the Shadow IT Department has lower standards. ;-)
aayala|4 years ago
nonameiguess|4 years ago
A few factors that I think contributed to the high quality:
- Waterfall development. It may be wrong for fast-moving commercial applications with fickle user bases, but we were targeting hardware capabilities we knew in exact detail years in advance of them launching into orbit and becoming usable, so taking the time to write out detailed specifications and requirements and tailoring verification testing to these structured the development. No chaos. Everything had a clear purpose with a really obvious way of telling whether what you did was right or wrong.
- Dedicated testing teams independent of the dev teams. When their entire job is to find bugs and break stuff, it makes a difference.
- Very little library use. This wasn't really a "build versus buy" decision per se; the program had existed for such a long time that, at the time we solved most problems, we were legitimately the first to do it, and since the code base was entirely classified, we couldn't release our own work as libraries. The upside is that all of our work went into actually writing and testing code, with very little work dedicated to managing dependencies, and the developers understood what everything was doing, including low-level functionality like memory and thread pools, filesystem drivers, and scientific functions like coordinate transformation and ground-to-orbit projection. Nothing was a black box. We understood our system because we wrote it.
- Continuity. The lead technical people, from the research scientists who developed all the algorithms to the software architects who came up with data structures and class hierarchies, were often 30+ year veterans that had never worked on anything else. They were the world's foremost experts in what we were doing, so they were good at it.
- Effectively no corporate-level management interference in what we were doing. The work was all classified and the top suits aren't cleared to know what we're doing anyway. We're either costing more than the contract pays or we're not, and that's all that really matters. They can't micromanage if they can't even get into your building.
throwhauser|4 years ago
GianFabien|4 years ago
But many have awkward, confusing UIs, are brittle in the face of required changes, and have documentation out of sync with the code, which often has confusing and contradictory comments.
Management is always screaming for the latest fix or change to be made ASAP. So the programmers do their best in adverse conditions. Any attempt to do a proper job is a career-limiting move.
pbalau|4 years ago
FreeBSD, OpenBSD, the Linux kernel
timhrothgar|4 years ago
cies|4 years ago
If you have written GUIs in C/Win, C++/Qt, Java/whatever, and JS/React (or Vue) you should have an idea of what went wrong. That's just GUIs.
Bad code is always going to happen; when you make stuff you will regret some choices. That's a given. But at what cost you can refactor your way out depends a lot on the language you have used.
Now there are languages that make it harder to create a mess that is hard to refactor yourself out of. They are languages that, you could say, optimize for refactoring by using strong typing: Rust, Haskell, Elm, OCaml/ReasonML/ReScript, Kotlin to some extent.
But none of them are mainstream.
The IDE assistance I got with refactoring C++/Qt in 2001 is still miles ahead of what I get with JS/React (or Vue) in 2022.
I recently did an Elm app on an automatically generated GraphQL API on a PG db with Hasura. I could auto-generate typesafe bindings to the GraphQL schema in Elm. This was the first time I felt C++/Qt (or Ruby/Rails) kind of next-level powerful again. Type safety from the db schema, through the API, to my frontend/UX code in strongly typed Elm.
So I think it is improving, but not so much in mainstream languages.
nlfire|4 years ago
That phase of my career was very rare. Yes, I have gotten periods of time where I get to pay down technical debt, but mostly the bosses/employers just want me and my colleagues to move on as fast as possible to start the next project. They don't care how many bugs are filed against the old project, and we'll just squeeze in the critical ones.
The go-go-go attitude is what wears me down and makes me want out of the industry. I want to feel like I finished something. Not perfection, but finished you know? That there isn't a mountain of bugs I never even looked at?
I don't think this is something new however.
arnvald|4 years ago
Keeping a codebase well maintained requires effort, and it's often very tempting to cut a corner here or there, or to focus on adding new features instead of keeping dependencies up to date. Initially these shortcuts don't have a big impact, but at some point you notice it has all become messy, and now it's hard to bring it back into shape.
smoldesu|4 years ago
Nowadays, people don't care. The 90s ruined us with its impulse economy; society as a whole felt as though we were entitled to just the good parts and nothing more. As a result, software got developed that way. Fine Corinthian leather, Gaussian-frosted glass and lickable scrollbars won out against dependable, powerful software interfaces. Society doesn't want good software, they just want to feel good. People can leverage that desire to make a lot of money by selling mostly-satisfying software. Stay hungry, stay foolish?
KerrAvon|4 years ago
An old Unix text interface isn’t going to allow me to edit a video or run a modern business. We do move on for some good reasons.
moonchrome|4 years ago
Put 10 senior developers in the room, ask them a question about those practices. You'll probably get 8 variations on a mainstream approach and 2 alternative approaches.
As I'm often switching between stacks, I'm shocked how lacking automated tooling is at enforcing basic formatting guidelines in some languages. IMO JavaScript's Prettier is the best here - it offers practically no room for subjective tweaks.
For example, I'm currently working in C# and it's so much worse in this regard; formatters between IDEs aren't even compatible.
So if something as trivial as formatting is this hard, what do you think enforcing style guides and other quality metrics takes?
In a small young team you can have one or more people steering the ship, but when they move on or you add more new people, even in the best case the newcomers are just trying to guess what the previous team would have done to maintain style.
richardwhiuk|4 years ago
tcgv|4 years ago
alkonaut|4 years ago
The reason you see buggy and bloated software is survivorship bias, I think. The software you see is the software that survived long enough to become bad. That programs grow old and bloated is a testament to the fact that someone uses them. This was always the case too. Software was definitely not any better historically.
menotyou|4 years ago
What I need is software that has the required functionality. What I don't need is bloatware that has errors in every function.
As a user, I hate CI/CD. Just after I've gotten used to the old functionality, suddenly my buttons have moved and a function I rely on behaves differently. I have to start learning again instead of doing my productive job.
marcinzm|4 years ago
Go look at the business teams of any company and you will find a mountain of excel floating around. Magic excel that does magic things that no one understands anymore. Processes that make no sense but are ossified in place over decades. And so on.
This is a problem of having many people working together without an actual unified goal and nothing to do with tech. Large corporations are inefficient and slow. They are called dinosaurs for a reason.
tester756|4 years ago
The thing is, if you took three engineers with 10 years of experience each - one in web app backends, one a heavy FP programmer, and a third in e.g. kernel programming - they'd have different definitions of "good code".
For web apps it's probably a lot of abstractions/indirections, DDD, patterns, heavy OOP, testability.
For kernel code there's nothing wrong with gotos, ugly hacks for performance, 10 meters of ifs, stuff like likely/unlikely.
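For context, `likely`/`unlikely` are branch-prediction hints; a minimal sketch of how they are typically defined around GCC's `__builtin_expect` (the `process` function is a made-up example, not kernel code):

```c
/* Branch-prediction hint macros, as typically defined for GCC/Clang. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Made-up example: tell the compiler the error path is cold. */
int process(int err)
{
    if (unlikely(err))
        return -1;  /* cold path, kept out of the hot code layout */
    return 0;       /* hot path falls straight through */
}
```

In a web-app backend this would be pointless micro-optimization; in a kernel hot path it's idiomatic, which is exactly the point about differing definitions of "good code".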
Beltiras|4 years ago
I'd love for some more contract work in this vein but I am unsure how to find it.
shime|4 years ago
1) The tension between leadership that usually only prioritizes shipping new features and engineering that wants to write sustainable code. There is usually no incentive for engineering to decrease technical debt or increase sustainability, as these changes are usually not tangible to leadership. The only short-term beneficiaries of one engineer's code quality improvements are other engineers, which often don't care that much, as they are busy shipping new features as quickly as possible.
2) Speaking of short-term, not all leadership thinks long-term. Sustainable code makes sense if you're thinking long-term, but not if leadership is chasing lucrative exits in a couple of months. If a project survives for 20 years, chances are it survived multiple leadership changes, which all thought short-term.
3) In software, the only constant is change. As a project gets older, it has to deal with all the changes that aging entails. Aging means increased exposure to reality and all the "surprising amount of detail"[1] it contains. This increases complexity, introduces bugs, adds edge cases to deal with, etc.
[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
pietromenna|4 years ago
Those two factors are a reality nowadays: people leave teams when they get experience and go to a place where they have 0 experience (knowledge is lost), and everything is built under pressure to deliver to meet market demands (so we rush to deliver features).
I would also tell you that in the past, well-organized projects were the exception, not the rule.
unknown|4 years ago
[deleted]
sershe|4 years ago
In a large complex system, something written lovingly with best practices by a guy who left 3 years ago needs to be modified. The new dev doesn't have a good mental model of this code, or maybe has no model because it's the first time anyone still on the team is looking at it in depth. Even with the best intentions, they tend to work "around" the old code with fewer modifications than the original developer would have made to properly integrate the change; instead they put stuff on the "outside"... Aside from additional code rot/clunkiness, this also results in fragile dependencies that are even harder to untangle and clean up, and perpetuates the cycle.
Joeri|4 years ago
And this is where you come in, on an old ugly codebase with perennial quality problems and bad development practices, stuck in a hole and trying to dig its way out. Eventually it will get replaced by a fresh new project by a fresh new team, who just know that this time it will be different.
karmakaze|4 years ago
Video games are the most recent to undergo this transition. Until not too long ago, physical DVD or Blu-ray media was the standard distribution mechanism. There were downloadable patches available post-launch, but their sizes were originally somewhat limited by writable storage capacity. Now digital distribution over broadband connections is very common.
The softest of all is cloud software sold as a service. Any bug can be fixed for all users by updating the servers with a CI/CD deployment, sometimes in minutes. With this increased softness, the product is always in development, and there will always be new areas with some usability issues as well as stable parts that have been worked out. But at any given time, there could be a fixed/large number of bugs if you were to count every one. Look in the reported issues for a popular repo that you thought was stable software - most of the time, most issues don't affect your use cases.
Long story short, it's what other comments say: it's economics and optimization. Of course there are also bad development and release policies, which amount to poor management or engineering culture, but I can't say that this is in excess of what's economical, in my experience, except perhaps in the rare cases I ran away from.
duped|4 years ago
I think your experience has some profound survivorship bias too. The shitty codebases last longer, which should be incredibly telling about the value of code quality to a business.
bryanrasmussen|4 years ago
timhrothgar|4 years ago
cercatrova|4 years ago
https://en.wikipedia.org/wiki/Andy_and_Bill%27s_law
https://en.wikipedia.org/wiki/Wirth%27s_law
HellDunkel|4 years ago
ppeetteerr|4 years ago
A rule of thumb is that you'll always have at least 2x or more code than engineers to maintain it. When setting priorities, you have to be pragmatic about which code to update, and which features to build.
That's probably why you're seeing so much old code.
By the way, as a freelancer, you may also be hired to maintain the ugly bits of a system. Full time employees generally work on building value, whereas freelancers are hired to take care of things that people generally don't want to hire full time employees to do.
nickm12|4 years ago
That said, I think it's only slightly better, and not as good as it should be given our tools and (collective) experience. I chalk this up to the reasons other people have given—mainly the business of software doesn't favor clean code and the majority of code is written by new developers who are still developing their skills.
feim_2022|4 years ago
Some software companies realize that quality is important for their business. And they do this right.
System software providers do this well. Examples include Amazon's S3 store, leading relational databases, and even embedded software such as the Arista EOS that Arista runs in their networking gear - all have very high quality.
The reason we don't see it all around us is that most well-written software is invisible to us. We don't really think about the iOS software on our phones - the crash rate is extremely low.
menotyou|4 years ago
Every one of these things brings multiple tools, frameworks, design patterns, conventions and standards, which are all hyped for a short time until everything breaks down under the additional layer of unmanageable complexity, until the next hyped toolset is introduced promising to solve all the underlying problems but finally adding yet another layer of complexity.
Later, you can't find anyone who can maintain a codebase developed using a framework hyped five years ago.
Example of a large, old, good, maintainable codebase supported by a large number of developers in the world: ABAP. (Sorry, no GitHub, no open source. And sorry again: imperative language.)
And the silver medal goes to............................: SQL.
tmp_anon_22|4 years ago
timhrothgar|4 years ago
unknown|4 years ago
[deleted]
jimmont|4 years ago
behnamoh|4 years ago
BatteryMountain|4 years ago
At this point I'm considering only joining startups with greenfield projects so I can have more power to keep things simple. Joining projects that have been around for a while has become tiresome. It's always the same kind of messes too and they are always abandoned. I wish developers would stop building things and then running away. It's your mess, your name is on it, clean it up before you leave.
cryptica|4 years ago
There is a technology mono-culture, heavy in dogma and censorship.
I suspect that is what happens when prominent developers who happen to be working for a financially successful company like Facebook, Google or Microsoft end up promoting and heavily pushing their favorite tools onto everyone down the hierarchy and company boundaries (as if they were silver bullets) and censoring nuanced, rational discussions in the process.
rmk|4 years ago
In comparison, software is a very young field, and is not regulated at all. Open any EULA and you will see that you are indemnifying the vendor from any sort of liability to the maximum extent possible. Further, software is only loosely limited by physics: almost anything that you can imagine can be built using software, so software creations are incredibly complex, and are all man-made, with extensive combinatorial inputs that simply can't be tested or are not feasible to test economically. Also consider the fact that software is almost always purpose-built and not comparable to a mass-produced artifact such as an IC engine, or a girder, or a length of rebar, or a chemical reagent. So the methods of industrial engineering, which concern themselves with quality control in the face of uncertainty, do not neatly apply to the software world. The rate of change of software artifacts is just mind-boggling: once a bolt is produced, that's it. It doesn't need to be continuously maintained, updated, or improved, because it simply doesn't possess the malleability of code. If you fasten it with the right amount of torque, you are probably good for an indefinite length of time and can expect to simply forget about it. The software equivalents of nuts and bolts, such as compilers, are nowhere near as simple as nuts and bolts: they are whole universes in their own right, and every detail of that universe is man-made, conceptualized from scratch.
I could go on, but given that software is so complex, it is to be expected that it doesn't hold up to your notions of quality.
If you want to look at large codebases that have maintained a high quality, the obvious recommendation is the Linux Kernel. Another is the Go Standard Library, which is very readable and well-written. I think musl, a minimal libc replacement, is also considered very well-written (but perhaps it's not a huge codebase). Postgres source is also widely known to be pretty good for such a complex codebase that is worked on by a large group.
softwaredoug|4 years ago
If last decade only seasoned professional bakers baked bread, average bread quality would be amazing.
If this decade, _everyone wants to bake bread_, average bread quality would be pretty bad.
wesapien|4 years ago
salawat|4 years ago
Look at the Space Shuttle's code. Look at the code and systems to drive anything where any level of assurance must be maintained.
Quality is hard. It is all encompassing, and for all too many optimizing decision makers, it is "The crappiest thing I can sell without looking like a nitwit, or killing somebody, that takes the least money to develop or doing something so blatantly illegal I can't get the lawyers to realistically dig me out.".
The rest is noise.
deathofsocrates|4 years ago
dustingetz|4 years ago
In startups this is exaggerated further – you need to double with each capital milestone, or you get starved out by competitors and die. Grow your valuation faster than you grow your technical debt and you can buy your way out with armies of engineers. Smash your competitors with money or buy them outright.
In enterprise, Microsoft software quality oscillates in waves and this works because of their mortal lock on distribution. MS Teams can get away with being way worse than Slack and Zoom. The result is MS can just push early stage trash on us and get away with it long enough to backfill the quality as the next technology wave crests.
Growth and distribution matters SO much more than tech.
menotyou|4 years ago
What users need is stable software (no CI/CD churn) that has the function set they need to do their business.
strictfp|4 years ago
Adding more manpower works around an inefficient codebase by writing more code on the side, and so we get a growing Rube Goldberg machine, whether we build one monolith or 4,000 microservices.
azth|4 years ago
rvr_|4 years ago
unknown|4 years ago
[deleted]
supertrope|4 years ago
duxup|4 years ago
When I started playing with computers an error often meant a hard crash of the whole system.
Now an app dies (rare) and I just restart it and often it comes back up just where I left it.
That’s a very specific example but I certainly prefer it to the past.
drewcoo|4 years ago
If you dig for documentation, for actual proof of your claims, I bet the answer(s) will be more obvious.
zqna|4 years ago
ramesh31|4 years ago
jamjamjamjamjam|4 years ago
tschellenbach|4 years ago
I think the solution is to build less and reuse more.
galkk|4 years ago
svilen_dobrev|4 years ago
There is a constant, limited ability to grasp things, but the complexity that a piece of software represents keeps growing.
aaccount|4 years ago
tored|4 years ago
bluepoint|4 years ago
GhettoComputers|4 years ago
tl;dr
Software efficiency is traded for developer-time efficiency; you see this less under hardware constraints like embedded systems programming. As hardware capabilities increase, efficiency becomes less necessary. I see a reversal of this trend with Rust (ripgrep is amazing) and WebAssembly, while making programming easy leads to poorly optimized software like Electron wrappers.
riskneutral|4 years ago
Which brings me to my point that "software quality" is something different. "Code quality" is what programmers obsess over because it determines the quality of their lives. But "software quality" is what defines the lives of their users. I would define high-quality software as being performant, efficient, bug-free, secure, correct, with a usable interface, etc. Fortunately, these objectives can be met without necessarily needing very high code quality under the hood. The core banking software keeping track of your bank balances, or the autopilot software on your next plane ride, is probably written in some terrible legacy mix of COBOL, FORTRAN, C, C++, etc., and probably carries huge amounts of technical debt. But the objective quality of the software from the user side is very good, and once a critical software component has been written and tested, the preference is to not change it (if it isn't broken, don't fix it). As long as your bank balance shows up correctly and your plane doesn't crash, you don't worry about the underlying code quality at all as a user.
So, in summary, the bad news is that you can never prevent code quality degradation. Any large, growing and aging system will inevitably lose conceptual integrity, have poor code quality, will come with a mountain of technical debt, and will get harder and harder to modify and grow over time and scale. The good news, however, is that you can still ensure quality of the product for the end-user by throwing enough people, grind and money at the problem.
timhrothgar|4 years ago
As such, perhaps it's an unworthy goal to maintain high code quality over the long term.
unknown|4 years ago
[deleted]
sys_64738|4 years ago
timhrothgar|4 years ago
hogrider|4 years ago
StewardMcOy|4 years ago
The first is that best practices, even if unanimously agreed upon, don't always survive contact with the user. Users will use software in ways that you didn't intend, and their usage patterns may expose bugs or have deleterious performance impacts on your code.
For example, if you're using a functional core, imperative shell design, you get all of the stability and ease-of-reasoning benefits of immutability within your functional core. But users may need to frequently update a part of your data model that you expected would seldom change, and the way the code is designed, changing that part of the model triggers a very expensive rebuild of the world. At that point, you're either forced to completely re-architect or to come up with a clever hack for this one specific use case.
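A minimal sketch of that failure mode (all names here are hypothetical, not from the original comment): the pure core re-derives the whole world on every change, which is elegant right up until a field you assumed was rarely touched becomes the hot path.

```python
from dataclasses import dataclass

# Functional core, imperative shell: the core is pure, so every
# update returns a brand-new, fully re-derived World value.

@dataclass(frozen=True)
class World:
    settings: dict   # rarely changes (or so we assumed at design time)
    counter: int     # turns out users bump this constantly
    derived: tuple   # expensive to recompute

def expensive_derivation(settings, counter):
    # Stand-in for a costly "rebuild the world" computation.
    return tuple(sorted(settings)) + (counter,)

def update(world: World, **changes) -> World:
    # Pure core: ANY change, however small, pays for a full re-derivation.
    settings = changes.get("settings", world.settings)
    counter = changes.get("counter", world.counter)
    return World(settings, counter, expensive_derivation(settings, counter))

# Imperative shell: holds the single mutable reference.
world = World({"theme": "dark"}, 0, ())
world = update(world, settings={"theme": "light"})  # fine: genuinely rare
world = update(world, counter=world.counter + 1)    # hot path: full rebuild anyway
```

The design is perfectly sound for the usage pattern it was built for; the problem only appears when real users hammer `counter`, and at that point the choice is re-architecting (e.g. incremental derivation) or a special-case hack for that one field.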
And re-architecting isn't always a guarantee of success. I once worked on a data warehouse system where the strong ACID properties of the database, along with the way the data was segmented, ended up causing server problems with the specific hardware we were using. It's been decades, so I don't remember the exact issue, but something about issuing specific sequences of reads, seeks, and writes over and over caused an issue with the disk buffer, and when the OS periodically went to read data it needed, it ended up with our app's data instead. It was something that could be solved with a different server, but at the time there was no budget for it, and a migration to new hardware wouldn't be possible until after the holiday retail season, so we ended up having to store some customers' data in files on disk rather than in a proper database. Then we had to change the schema in response to new requirements, so by the time we migrated to new hardware, reconciling the divergent schemas between the database and the files was a nightmare, and it might never have gotten done without some serious politicking, which still took a couple of years.
The second reason is that development environments change over time, and invalidate a lot of the assumptions apps are developed on. It's not just frameworks and libraries, but also languages.
I once worked on a Python 2 web service that did heavy text processing and had to make extensive use of Unicode. Just due to the history of how Unicode and Python developed in parallel, Python 2 had some eccentricities when it came to Unicode support. We understood all of these well and were able to develop a well-tested codebase that abstracted away these issues. Python 3 completely changed how the language handled Unicode, and the result on our project was disastrous. We essentially had to rewrite everything to make it work, but it was so messy we threw it all out and completely redesigned it. I can imagine a lot of companies wouldn't want to make that investment, especially since Python 3's Unicode support, while better, is still quite clunky compared to other languages. It's a hard sell to tell your boss you built something on top of broken Unicode support, and now you want to rebuild it on top of a different broken Unicode implementation. They'll just ask if you'll need to do that again in ten years.
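The kind of breakage being described can be sketched in a few lines (this is a generic illustration of Python 3's str/bytes split, not the commenter's actual codebase): Python 2 silently coerced between byte strings and Unicode strings via ASCII, so mixed code often only failed at runtime on non-ASCII input, and teams built abstraction layers to paper over it. Python 3 made the two types strictly separate, which invalidated those abstractions wholesale.

```python
# Python 3: text and bytes are distinct types with an explicit boundary.
text = "naïve"                 # str: a sequence of Unicode code points
data = text.encode("utf-8")    # bytes: produced only by an explicit encode

assert isinstance(data, bytes)
assert data.decode("utf-8") == text   # round-trips cleanly

try:
    _ = "abc" + b"abc"         # Python 2 allowed this via implicit coercion;
except TypeError as exc:       # Python 3 rejects it outright
    print("mixing rejected:", exc)
```

Any Python 2 codebase whose Unicode handling leaned on that implicit coercion (or on carefully placed workarounds for it) had to be audited call-by-call, which is why "just port it" so often turned into "rewrite it."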
supperburg|4 years ago
worthless-trash|4 years ago
temptemptemp111|4 years ago
[deleted]
devwastaken|4 years ago
There's a reason the system is set up this way. It's not an accident; it's intentionally designed to take kids who follow orders, load them with debt, and make them work for less — this is how they are able to suppress workers banding together and save big money on salaries. Those who are actually indispensable get the big pay and certainly won't complain.
If we want quality we have to start making the companies pay their share, be willing to remove worker and education visas, and overall lock U.S. companies into the north american and European market. These corporate giants are misbehaving children that we have to put our foot down with.
Follow the money.
rocknor|4 years ago
[deleted]