top | item 31418139


ThenAsNow | 3 years ago

> Anyone is allowed to prefer a programming style that suits their aesthetics and habits, and like that one over all others. Aesthetic preferences are a very valid way to choose your programming language — ultimately that's how we all pick our favourite languages — and there's no need to make up universal empirical claims to support our preferences.
That's fine, but I'm not sure what it has to do with my comment, which was not about preferences based on aesthetics or habits.

Thanks for the links though.

> There's really no need to assert what is really a conjecture, let alone one that's been examined and has not been verified.
There's no unsupported conjecture in "it is strictly more rigorous to catch equivalent bugs through the interpreter/compiler than through testing or other runtime-dependent approaches."
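(To make the kind of bug under discussion concrete — this is an illustrative sketch, not from either commenter; the function name and values are hypothetical. A static checker such as mypy would flag the mismatched call before the program runs, whereas plain Python only surfaces it when that code path actually executes.)

```python
def total_cents(prices: list[int]) -> int:
    # Sum prices given as integer cents.
    return sum(prices)

# A static type checker rejects this call ahead of time, because
# "3.50" is a str, not an int.  At runtime, Python only fails once
# sum() actually tries to add the str to a running int total:
try:
    total_cents([100, "3.50"])  # type: ignore[list-item]
except TypeError:
    print("caught only at runtime")
```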

> If you believe the conjecture is intrinsically hard to verify, you're conceding that you're only claiming a small effect at best (big effects are typically not hard to verify), and so there's even less justification for continuing to assert it.
It's easy to fall victim to the McNamara fallacy: assuming that if something is hard to measure, its effect or importance must be insignificant. Anyone looking back at U.S. defense and procurement policy from Robert McNamara's era can see how poorly that kind of thinking matched reality. The Dan Luu page you cited, more than anything else, seems to reinforce that the cited studies are hard to interpret, whether for rigorous conclusions or for validity of methodology.

This is why I did not make sweeping statements along the lines of "the majority of dynamically-typed software in production [no qualifier on what "production" means] would have fewer bugs if it were statically-typed" or the like.

pron | 3 years ago

> That's fine, but I'm not sure what it has to do with my comment as it was not about preferences based on aesthetics or habits.

Because you made the claim that "it is strictly more rigorous to catch equivalent bugs through the interpreter/compiler than through testing or other runtime-dependent approaches," but that claim was simply not found to be true.

> There's no unsupported conjecture in "it is strictly more rigorous to catch equivalent bugs through the interpreter/compiler than through testing or other runtime-dependent approaches."

There is, unless you define "more rigorous" in a tautological way. It does not seem to be the case that soundly enforcing constraints at compile time always leads to fewer bugs.

> It's easy to fall victim to the Robert McNamara fallacy, that if something isn't easy to measure its effect or importance is insignificant.

The statement, "you will have fewer bugs but won't be able to notice it," is unconvincing. For one, if you can't measure it, you can't keep asserting it. At best you can say you believe that to be the case. For another, we care about the effects we can see. If the effect doesn't have a noticeable impact, it doesn't really matter if it exists or not (and we haven't even been able to show that a large effect exists).

That the effect is small is still the likeliest explanation, but even if you have others, your conjecture is still conjecture until it is actually verified.

> The Dan Luu page you cited, more than anything else, seems to reinforce that the cited studies are hard to interpret for any rigorous conclusions or for validity of methodology.

It does support my main point that despite our attempts, we have not been able to show that types actually lead to significantly fewer bugs, i.e. that the approach is "more rigorous" in some useful sense.