Why is there so much useless and unreliable software?
83 points| learningstud | 3 years ago | reply
Instead of learning about those achievements and aiming to program with the same reliability, clarity, and sophistication, we see an abundance of software that cannot clearly describe its own behavior, let alone its misbehavior.
Instead of incorporating the full functionality of XML/HTML/CSS/SVG/JS/WebGL into the development experience and providing ways to control them at the fundamental level, we reinvent crude approximations like the various web frameworks.
YAML and JSON often trump XML/XSD until things get out of control, and even then, people still don't learn the lesson. Protobuf, flatbuffers, capnproto, and the like keep reinventing ASN.1.
Naive microservices partially reimplement Erlang's BEAM VM while ignoring all the hard parts that the BEAM got right. Many people riding the microservice bandwagon have never even heard of Paxos, not to mention TLA+.
Many programmers keep learning shiny new frameworks but are reluctant to learn the crucial fundamentals, e.g., Introduction to Parallel Algorithms and Architectures, or how to think clearly and unambiguously in the spirit of Coq/Agda/Lean.
No wonder ChatGPT exposes how shallow most of programming is and how lacking most programmers are in actual understanding. Linear logic and dependent types are there to help us design and think with clarity at a high level, but people would rather fumble around with OOP class hierarchies (participate in the pointless is-a/has-a arguments) and "architecture" design that only complicate things.
What is this madness? This doesn't sound like engineering.
[+] [-] pjc50|3 years ago|reply
https://twitter.com/maxkreminski/status/887815522061926400
"a reminder: if inexperienced creators are using your tool to churn out loads of half-baked garbage, your tool is a phenomenal success"
Software is such a powerful concept - it basically imbues physical objects with magic - that even bad software is hugely empowering to its users and takes off very quickly. The demand is staggering.
I appreciate that when you see yourself as the most intelligent person in the world, it becomes intolerable to be surrounded by unthinking muggles scratching in the dirt. But after a while you realise that life is more complicated than that, that people usually have good reasons for doing the things they do, and that perfection is neither attainable nor necessary for most of it.
[+] [-] zelphirkalt|3 years ago|reply
Most tech choices are made on the basis of incomplete information and incomplete planning, or even no planning ahead. Many merely follow hype instead of truly looking at the options and making a wise choice. Hype creates awareness of products or software, which ultimately reaches the uninformed masses. People not in the business of making software themselves usually have vastly less information to base their choices on, and often choose questionably. This in turn creates more incentive to continue making software like the one they chose. This is what the quote you posted amounts to.
[+] [-] s1k3s|3 years ago|reply
This also affects our people (usually our top engineers) - which is why I want to start developing our own products this year.
[+] [-] patrickk|3 years ago|reply
Covering your ass and ticking off items as launched are other pressures that lead to speed over quality every time.
The OP's post reads as very naive, with no experience of real-world corporate politics. No one really gives a shit about most things on that list (unfortunately).
[+] [-] StreamBright|3 years ago|reply
- please put this broken crap of 15 services written in 6 different programming languages on a 50 node cluster so we can read and write data with 300 kbit/s.
2 years later:
- could you do something about this cluster, it costs us too much.
[+] [-] fer|3 years ago|reply
When you're shooting a multi-hundred-million-dollar satellite into orbit, it's worth the extra few million spent on formal verification, because otherwise you might lose a gigantic investment and even kill people in the process.
When you're working on something without such extreme constraints, with vague SLAs, and limited to no business risk, then regular unit/integration/etc tests are good enough and exceedingly cheaper.
[+] [-] pjc50|3 years ago|reply
And even then the cost effects of that are so prohibitive that SpaceX transformed the industry with their "we cannot guarantee the hoverslam landing will work first time" approach.
Producing something imperfect quickly and then iterating beats upfront planning by such a large margin so often that it's not funny.
[+] [-] LAC-Tech|3 years ago|reply
People can give themselves all the fancy titles they want; I don't think 99% of software development is anything approaching Engineering.
(This isn't so much an attack on my fellow programmers as a recognition that this field is very young and still very immature.)
[+] [-] stared|3 years ago|reply
> This doesn't sound like engineering.
It is precisely engineering. As opposed to pure science and art. (I consider mathematics to be, above all, art.)
Based on the post itself, you come across as having a theoretical mindset. You may consider purity to be more important than practical applications. Yet if people write software to be used, they focus on the latter. Sometimes that results in hacky code, even within an already hacky language. There are no extra points for purity.
Purity itself is a double-edged sword. Sometimes it makes the code more reliable. Other times - it generates a lot of abstract nonsense, which makes it harder to reason about the piece of software or change it.
On the positive side, look at the Rust language (and community!). While it has lovely abstractions and safety guarantees, it is a practical language: pleasant to write and performant to execute.
[+] [-] davidktr|3 years ago|reply
Is XML better than JSON? For long-term stability, sure. For a quick config file? Nope. You see, to understand XML you kind of have to know how to work with trees. Bread and butter for compsci grads, a nuisance for everyone else.
Recently I needed to query a SOAP-based API. It took me three days: because I had no idea how namespaces in XML worked, because I wasn't sure which Python library to use (lxml was the answer), and because their API had some quirks. I ended up reading forum posts from the late '00s; the documentation is really scarce.
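For what it's worth, namespaces are the usual stumbling block here. A minimal sketch using Python's stdlib `xml.etree.ElementTree` (lxml works much the same way; the document and namespace URIs below are invented):

```python
import xml.etree.ElementTree as ET

# A tiny SOAP-style response; the payload lives in its own namespace.
xml_doc = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPriceResponse xmlns="http://example.com/stock">
      <Price>34.5</Price>
    </GetPriceResponse>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(xml_doc)

# Element names are qualified by namespace URI, not by prefix, so queries
# need a prefix->URI map; the prefixes here are local to this lookup.
ns = {
    "soap": "http://schemas.xmlsoap.org/soap/envelope/",
    "s": "http://example.com/stock",
}
price = root.find("./soap:Body/s:GetPriceResponse/s:Price", ns)
print(price.text)  # -> 34.5
```

The trap is that the prefix in the document (`soap:`, or none for a default namespace) is irrelevant to the query; only the URI matters.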
I would love to learn the fundamentals, I don't like half-assing things. But a) I'm not going back to uni for it, b) I'm not bright enough to be excellent in two fields. Learning the fundamentals takes a very, very long time. Where to start even? A complete math program? Programming fundamentals as such? Theoretical compsci?
At the end of the day, things need to work. If they fall apart next year I can tell my boss "ooohhh big problem give money". Using XML instead of JSON has no payoff today.
[+] [-] Sankozi|3 years ago|reply
I have the opposite opinion. JSON is a great solution for a protocol or serialization format, better than XML (JSON is faster, smaller, safer, easier to understand). But it is one of the worst possible choices for a config file format. Even XML is better in that case. The lack of comments is a complete showstopper in any file syntax humans edit by hand.
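The comment point is easy to demonstrate; a small Python sketch (the config contents are made up):

```python
import json
import xml.etree.ElementTree as ET

# Strict JSON (RFC 8259) has no comment syntax at all.
json_config = """\
{
  // how many worker processes to start
  "workers": 4
}
"""
try:
    json.loads(json_config)
    parsed_ok = True
except json.JSONDecodeError:
    parsed_ok = False
print(parsed_ok)  # -> False: the parser rejects the comment

# XML has had comments since day one; the parser simply skips them.
xml_config = "<config><!-- how many worker processes --><workers>4</workers></config>"
workers = int(ET.fromstring(xml_config).findtext("workers"))
print(workers)  # -> 4
```

Which is why config-oriented formats (TOML, YAML, even INI) all support comments, and why "JSON with comments" dialects keep reappearing.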
[+] [-] jaredsohn|3 years ago|reply
I don't get that.
>to understand XML you kind of have to know how to work with trees.
JSON is trees, too.
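Right; once parsed, a JSON document is the same kind of nested tree, traversed much like an XML DOM (example data invented):

```python
import json

# Nested objects and arrays form a tree of dicts and lists.
doc = json.loads('{"server": {"host": "localhost", "ports": [80, 443]}}')
print(doc["server"]["ports"][0])  # -> 80
```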
[+] [-] solus_factor|3 years ago|reply
Seems like there's not that much demand for what you would like to see.
Crappy, barely-working software now is better than a perfect one delayed by a year.
[+] [-] seren|3 years ago|reply
However, it can probably hurt a brand's reputation in the long run if your quality level is below customer expectations.
[+] [-] rTX5CMRXIfFG|3 years ago|reply
It seems to me that OP places a high level of usefulness or value on software at lower levels of abstraction, and I don't disagree; it's akin to how a manufacturing business is tremendously more valuable than the variety of consumer-facing products that can be made from it. But the cost of entering such a business domain, and of actually succeeding in making a profit, is often high.
[+] [-] TrackerFF|3 years ago|reply
In the world of software, especially for startups, it is completely unacceptable to spend 5 years building an MVP. There are some protected sectors where things move slower (defense and aerospace, for example), but if you're in the consumer field, you just can't spend too much time. You want to push out an MVP as soon as possible and build on that.
And most software companies do not get penalized on shipping buggy software. Big game studios ship broken games, and spend a couple of years patching them up to their final form. People bitch and moan, but still throw money at 'em.
If you don't want bloated, broken, slow, and unreliable software - use your pocketbook.
[+] [-] mikewarot|3 years ago|reply
That was the seed of the madness.
Instead of supplying dialog boxes (open, save, etc.) for applications to use and then letting them open files directly, the OS could have supplied handles (capabilities) from those dialog boxes for the applications to use. This would have kept the user interface almost identical, and required only a few lines of code to change in applications, in exchange for an environment almost immune to confused or rogue programs.
Because expectations were so corrupted in the desktop days, the situation now is effectively hopeless.
Applications should NEVER be trusted, especially not by the operating system.
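The proposal above can be sketched in a few lines of Python (all names hypothetical; a real implementation would live in the OS with kernel enforcement, not in-process): the trusted shell draws the dialog and opens the file, and the application receives only an open handle, never a path or ambient filesystem access.

```python
import io
import tempfile

class TrustedShell:
    """Stand-in for the OS shell: the only component allowed to touch paths."""
    def open_dialog(self, user_choice: str) -> io.TextIOBase:
        # The user picked this file in the OS-drawn dialog; the shell opens
        # it and grants the application a capability (an open handle).
        return open(user_choice)

def app_load(handle: io.TextIOBase) -> str:
    # The application works only with handles it has been granted; it has
    # no authority of its own to open arbitrary paths.
    return handle.read()

# Demo: simulate the user picking a file in the dialog.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("user's document")
    picked = f.name

with TrustedShell().open_dialog(picked) as h:
    print(app_load(h))  # -> user's document
```

The UI is unchanged from the user's point of view; the difference is that a confused or malicious `app_load` can only ever read what the user explicitly picked.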
[+] [-] pjc50|3 years ago|reply
> Applications should NEVER be trusted, especially not by the operating system.
This was basically impossible in the early personal computer era, before memory protection was a thing. So classic Mac OS, pre-NT Windows, AmigaOS, etc. were all built around the assumption that applications could read each other's memory.
[+] [-] Someone|3 years ago|reply
Not hopeless. macOS, for example, added entitlements that get you there: applications cannot list random directories or open files at will.
If an application opens a file browser, the operating system shows you your file system. When you pick a file, the operating system gives the application the right to read that file.
See https://developer.apple.com/documentation/bundleresources/en...
[+] [-] michaelteter|3 years ago|reply
- no engineering standards in computer science, especially not in computer science education
- some corporate finance views which see technology as a cost center rather than a business enabler
- some corporate strategies where marketing decides what is possible and when it must be ready (tomorrow. or yesterday.)
- computing and software development as a "fun", creative endeavor - as opposed to a rigorous, formal process
I'm sure there are dozens more reasons.
[+] [-] ChrisMarshallNY|3 years ago|reply
As long as people are willing to pay for dross, people will continue to produce it. Just look at Hollywood, for a classic example.
For myself, I found a company that was interested in producing Quality, and spent most of my career there. It had its trade-offs.
It certainly didn’t get me much respect from today’s software development community. If anything, it gets me scorn and outright disrespect.
I had to retire and work on my own (for free), in order to finally get to write software the way that I want. I doubt it would be considered commercially viable.
In any case, "doing it right" is in the eye of the beholder. The fact that I'm not writing all my native Apple software in Haskell is "doing it wrong" to some folks. Don't even get me started on OOP or using UIKit/AppKit.
Can't please everyone.
I'm best off putting my values into practice, in my own context, and not being an old man yelling at the sky.
But I really want to. It just won’t do any good.
[+] [-] silvestrov|3 years ago|reply
It is also "good enough" for most use cases. So it is reasonable to start out with JSON and only upgrade to XML for complex documents/structures.
[+] [-] khalidx|3 years ago|reply
If we could abstract away the storage so that it is user owned, or at least a "utility" like electrical power lines where everyone is playing by the same rules, we would be able to trust these platforms significantly more, and rest assured our data is safe even if the software isn't.
[+] [-] Joel_Mckay|3 years ago|reply
( Melvin E. Conway, https://en.wikipedia.org/wiki/Conway%27s_law )
Thus, we may conclude the world has some people that choose to be "useless and unreliable". The software part is just an artifact of this ideology. =)
[+] [-] pjc50|3 years ago|reply
This is the original Amazon memo and the rationale for microservices.
The bigger picture is that if you look at the totality of the software and hardware system, from app to browser to OS to hardware, you see that it replicates the company and market structure that produces it. Which is partly why the web is so chaotic. The web is a standards battleground over which companies fight.
[+] [-] ss48|3 years ago|reply
Transferring ideas and software discovery are difficult problems. Expecting everyone to know that tech x is superior to tech y and singularly pursue development of tech x is unreasonable.
Most people prefer to work with a solution they are familiar with, or one that caters to their particular technology stack. JSON is a first-class citizen in JavaScript, but can be secondary in other programming languages. Whatever provides the least cost or effort in resolving, or minimizing the extent of, the problem at hand is all that is needed. There is almost never a case where point B must follow or supersede point A. Point A continues to solve the problems it was designed for, or turned out to be useful for by chance: the technology can continue to be used where it is known and regarded.
[+] [-] GianFabien|3 years ago|reply
I find that the learning curve to get on top of most abstractions is greater than what is required to understand the foundations. Knowing and applying foundational components tends to yield more performant and less bloated solutions. Debugging is easier because you don't need to unravel layers of abstractions which don't quite align with the domain you are working with.
[+] [-] porcoda|3 years ago|reply
I’m hoping such pressure arrives in the form of legal or regulatory stuff, even if it chills the industry and slows/shrinks it. Until then we’re still in the computing equivalent of the auto industry when people died in mild fender benders that today are just mild annoyances.