top | item 29896937

timhrothgar | 4 years ago

Perhaps I should update the question. I'm not referring to ALL software quality. I'm referring to the quality of codebases that are 1) old, 2) large, and 3) supported by many people.

You make a good point though. Perhaps I just miss the good ol' days of working on small teams with small codebases that were pretty easily maintained.

quanticle|4 years ago

Even with those caveats, it's unclear whether the premise is satisfied. The Linux kernel is old (old enough to drink!), large (and getting larger!), and supported by many people. Has its quality declined? I don't think so. It supports more hardware than ever. Kernel panics don't happen nearly as often as they used to. New features, like BPF, make kernel programming easier. It's difficult to say that the Linux kernel's quality has declined over time.

peakaboo|4 years ago

People bring up the Linux kernel the same way they bring up Oprah as an example of being successful in America.

99.99999% of software in big corporations will not have even 5% of the quality of the Linux kernel. The reason the kernel has such high quality is Linus being a dictator, training everybody, in not always nice ways, to write code that doesn't break anything and that is maintainable.

He cares about the code and he has the status and the mandate to prevent it from becoming shit.

timhrothgar|4 years ago

I had this specific example in mind. How has Linux done it? Was it ultimately due to the benevolent dictator for life (BDFL) management practice?

sllabres|4 years ago

>> Kernel panics don't happen nearly as often as they used to

The earliest kernel I used in something one might call production was 1.2.12, around 1995. I must say that even then, with those early kernels, I had no panics at all and much higher uptimes (patching for security wasn't as much of an issue at that time ;-) )

com2kid|4 years ago

> Perhaps I should update the question. I'm not referring to ALL software quality. I'm referring to the quality of codebases that are 1) old, 2) large, and 3) supported by many people.

Because Google[1] and Facebook came along and scared everyone by iterating at a LOLWTF pace. Companies in surrounding spaces looked at what was taking up time in their release schedules, and the answer was "test passes", so they fired all their testers and told devs to add unit tests. But unit tests don't cut it.

Companies that used to have immaculate software quality had dedicated test automation engineers who had the job of abusing software in crazy bizarre ways. Then they hired armies of manual testers to go over anything that hadn't been automated.

Lots of problems existed with this system, one of which was that career advancement for software engineers in test was limited, because it is hard to get recognized for the two primary jobs of an SDET:

Signing off on code

Blocking a release on quality grounds

So you had a gradual rot of SDET and test orgs at companies, with pools of brilliance that slowly got drained as the best engineers got tired of being undervalued.

Start from that base, and then around ~2010 everything needs to start "moving fast".

Apple and MS both get rid of their test teams, and with two of the largest employers of dedicated software engineers in test getting out of the field, the entire field itself falls apart. Now it is career suicide, an ever shrinking career path that pays far less than doing "real" development work.

That leaves us at where we are today. Everything sucks and breaks all the time.

[1] Everyone forgets how bad the first 5 major versions of Android were.

mbrodersen|4 years ago

I have no QA testing, and the code I maintain hasn't had a customer production bug for 5+ years. And we are talking complex software written in C++. The reason why is 90,000 end-to-end use case tests. I routinely implement new features or refactor major parts of the application, and the tests tell me whether it is ready for production or not.
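The idea above, driving the whole application through a user-visible scenario and asserting on the outcome rather than on internals, can be sketched roughly like this. This is a minimal illustration, not the commenter's actual C++ harness; the `Invoice` application and its methods are hypothetical stand-ins:

```python
# Hypothetical application under test -- a stand-in for whatever
# the real (C++) system does.
class Invoice:
    def __init__(self, items):
        # items: list of (unit_price, quantity) pairs
        self.items = list(items)

    def total(self):
        return sum(price * qty for price, qty in self.items)

    def apply_discount(self, pct):
        # Reduce every unit price by pct percent.
        self.items = [(price * (1 - pct / 100), qty)
                      for price, qty in self.items]


def test_discount_use_case():
    # End-to-end style: set up state, run the full use case,
    # then assert only on the externally observable result.
    inv = Invoice([(10.0, 2), (5.0, 4)])   # total = 40.0
    inv.apply_discount(25)                  # 25% off -> expect 30.0
    assert abs(inv.total() - 30.0) < 1e-9


test_discount_use_case()
print("use-case test passed")
```

The point is that each test encodes one complete use case; refactoring the internals freely is safe as long as the full suite of such tests still passes.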

wrs|4 years ago

Old large codebases are mostly maintained by people who weren’t around when the code was originally coming into existence. They don’t know the implicit design assumptions and decisions, or even the history of requirements. One thing you’ll find in nearly any software project of any age is a lack of good documentation of those things, so as you lose the community folklore, people will start making myopic changes, cargo-culting, violating future-looking design principles, and so forth. Pretty soon you just have a pile of incoherent features and making systematic improvements is hard because the code is no longer systematic.

mbesto|4 years ago

> 1) old, 2) large, and 3) supported by many people.

Software entropy implies that software that doesn't change will always suffer entropy if the environment changes. In a vacuum, we would never need to change software once it was "feature complete". But that's not how the world works. Environments change and so that inevitably means software will corrode.

Candidly, I think you're just looking at the world through rose tinted glasses.

peakaboo|4 years ago

You listed the reason it's a mess yourself - many people worked on it and it's old.

In big companies there is no incentive at all for a developer to personally care about, and do battle over, things like code quality and reducing complexity.

If they use any kind of agile (lol) system, it will just be about polishing the turd so it doesn't break, and adding features in some way that doesn't require big rewrites.