gspencley | 3 months ago
I would have agreed with that statement a few years ago.
But what I'm seeing in the wild is an ideological attachment to the belief that "immutability is always good, so always do it."
And what we're seeing is NOT a ton of bugs and defects caused by state mutation. We're seeing customers walk away with millions of dollars because of massive performance degradation caused, in part, by developers who are programming in a language that does not support native immutability but are trying to shoehorn it in because of a BELIEF that it will, for sure, always cut down on the number of defects.
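To make concrete the kind of shoehorning I mean (a made-up toy, not code from any real product): building up an array "immutably" with spread copies the entire accumulator on every iteration, turning a linear loop quadratic, while the mutating version does the same work with amortized O(1) appends.

```javascript
// "Immutable" style: every iteration allocates and copies a brand-new
// array, so n iterations do O(n^2) total work.
function runningTotalsImmutable(values) {
  let acc = [];
  let total = 0;
  for (const v of values) {
    total += v;
    acc = [...acc, total]; // full copy of acc on each pass
  }
  return acc;
}

// Mutating style: same result, but each append is amortized O(1),
// so the whole loop is O(n).
function runningTotalsMutable(values) {
  const acc = [];
  let total = 0;
  for (const v of values) {
    total += v;
    acc.push(total); // in-place append, no copying
  }
  return acc;
}
```

Both return the same result; the difference only shows up as allocation pressure and wall-clock time once the arrays get large.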
Everything is contextual. Everything is a trade-off in engineering. If you disagree with that, you are making an ideological statement, not a factual one.
Any civil engineer will talk to you about tolerances. Only programmers ever say something is "inherently 'right'" or "inherently 'wrong'" regardless of context.
If your data is telling you that the number one complaint of your customers is runtime performance, and a statistically significant number of your observed defects can be traced to shoehorning in a paradigm the runtime does not support natively, then you've lost the argument about the benefits of immutability. In that context, immutability is demonstrably providing you with negative value, and by saying "we should make the runtime faster" you are hand-waving to a degree that would and should get you fired at that company.
If you work in academia, or are a compiler engineer, then the context you are sitting in might make it completely appropriate to spend your time and resources talking about language theory and how to improve the runtime performance of the machine being programmed for.
In a different context, when you are a software engineer being paid to develop customer-facing features, "just make the runtime faster" is not a viable option. It's not even worth talking about, since you have no direct influence over the runtime.
And the reason I brought this up is that we're talking about JavaScript / TypeScript specifically.
In a language like Clojure, it's moot because immutability is baked in. But within JavaScript it is not "nice" to see people trying to shoehorn it in. We can't, on the one hand, bitch and moan about how poorly websites all over the Internet perform on our devices while also saying "JavaScript developers should do immutability MORE."
At my company, measurable performance degradation is considered a defect that blocks a release. So you can't even claim you're reducing defects through immutability if you can point to a single PR that causes a perf regression by trying to do something in an immutable way.
So yeah, it's all trade-offs. It comes down to what you are prioritizing: runtime performance or data integrity? Not all applications will value both equally.
iLemming | 3 months ago
Still, personally I wouldn't call immutability a "trade-off", even in a JS context - for the majority of kinds of apps, it's still a big win. I've seen that many times with ClojureScript, which doesn't have a native runtime - it eventually emits JavaScript. I love Clojure, but I honestly refuse to believe that it invariably emits higher-performing JS code compared to vanilla JS with Immutable.js on top.
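To be clear about why I don't think immutable automatically means slow: persistent data structures don't copy on every update - they share the unchanged parts. A hand-rolled toy sketch of the idea (this is NOT Immutable.js's actual implementation, just an illustration of structural sharing):

```javascript
// Minimal persistent singly-linked list. "Prepending" creates one new
// node and reuses the entire old list as its tail - O(1), no copying,
// and the old list remains valid and unchanged.
const cons = (head, tail) => Object.freeze({ head, tail });

// Walk the list into a plain array, for inspection.
function toArray(list) {
  const out = [];
  for (let node = list; node !== null; node = node.tail) {
    out.push(node.head);
  }
  return out;
}

const xs = cons(2, cons(3, null)); // the list [2, 3]
const ys = cons(1, xs);            // the list [1, 2, 3], sharing xs as its tail
```

Libraries like Immutable.js apply the same sharing idea to richer structures (tries instead of linked lists), which is how they keep updates cheap without mutating anything.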
For some kinds of apps, yes, for sure, performance is the ultimate priority. In my mind, that's a trade-off similar to using C or even assembly because of required performance. It's undeniably important, yet these situations represent only a small fraction of overall use cases.
But sure, I agree with everything you say - immutability is great in general, but not for every given case.