danwilsonthomas | 8 years ago
Please don't assume the motivations or intentions behind my actions and then attempt to use that to prove a point.
To address that point explicitly: in general, if requirements change, then the code needs to change. Neither Clojure nor Haskell will make that fundamental truth go away. You say you want to create less work for yourself when this happens. I'd like to propose that there are different kinds of work, with differing qualities.

When you receive a requirement change, the first step is to determine what it means for the system as it stands. This is by far the hardest step and is where the majority of the "hard" work lies. The next step is to change the system to fit the new requirement. This is primarily a mechanical task of following the directions you laid out in the first step. At this point it's preferable to have some tool or process, even multiple, to make sure you do this part correctly.

Tests, types, and code review are the three major tools I've seen aimed at this problem. Tests help ensure you've specified the changes correctly by forcing you to write them down precisely. Types help ensure you've implemented the changes correctly by forcing consistency throughout the system. Code review helps ensure you've implemented the changes in a manner that is understandable (maintainable) by the rest of your team.

Note also that tests require an investment in writing and maintaining them; types (with global type inference) require no upfront investment but can require some maintenance; and code review requires the participation of team members of a similar skill level to you. All three also require some level of expertise to be effective. It seems shortsighted to throw away static types when they are, in my opinion, the cheapest method of helping ensure a correct implementation.
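As a minimal sketch of the "forcing consistency" point (the `Price` type and field names are my own illustration, not anything from this thread): when a requirement change alters a data type, the compiler rejects every use site that hasn't been updated, turning the mechanical second step into a checklist.

```haskell
-- Illustrative example: suppose a requirement change turned a plain
-- Int price into an amount paired with a currency. After editing the
-- data declaration, GHC flags every construction and pattern match
-- on Price that still assumes the old shape; the module only
-- compiles again once the change has been propagated everywhere.
data Currency = USD | EUR deriving (Eq, Show)

data Price = Price { amount :: Int, currency :: Currency }
  deriving (Eq, Show)

-- Every function touching Price had to be revisited to compile:
totalInCurrency :: Currency -> [Price] -> Int
totalInCurrency c ps = sum [amount p | p <- ps, currency p == c]
```

The point is not that the types catch deep logic bugs here, but that they mechanically enumerate the places the change must reach.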
wellpast | 8 years ago
I did a job for a defense contractor once that brought the entire team of 10 into a room to code review, page by page, a printed-out copy of the latest written code. Whether this was rational or not, I'm not sure, but I'll grant them that in mission-critical systems you don't want to take any chances, and you're willing to be far less efficient to make sure you don't deploy any bugs.
I've been at a few companies where one of the engineers discovers code coverage tools and proposes that the team maintain 100% coverage of their code. These were mainstream industry software businesses that could afford some minor bugs here and there, so no one thought this was a good idea. Most engineers I know doing industry programming think that trying to sustain some specific coverage percentage is a bad idea; they say you have to write your tests with specific justification and reason.
So no reasonable person I know would advocate an overall mandate to do Code Reviews like the above, or to hit some specific metric on Automated Tests. And yet the general tendency of statically typed programming languages is to enforce the usage of types through and through. This should strike us as interesting, considering that Code Review and Automated Tests are far more likely to root out important bugs than a type checker.
I'm not arguing against type verification. I agree with you that it is one tool among many. It's the weakest of the three tools you mentioned, and yet we are still somehow arguing over whether it should be (effectively) mandated by the programming language.
Why mandate type verification and not unit test coverage? Why not mandate code review coverage? Because, when mandated, they cost us more time than the benefit they bring.
My main argument is that type verification is a tool that should be used judiciously, not blindly across the board.
danwilsonthomas | 8 years ago
Not to bring up the tired argument again, but as far as I know there isn't proof for this. Nor is there proof that types are better at catching bugs than tests or code review. The best we can hope for is anecdata. I've got my fair share that shows types in a good light and I'm sure you've got your fair share that shows tests in a good light.
That being said, global type inference is, in my opinion, a fundamental game-changer here. You are no longer required to put effort into your types up front; it is the only one of those three tools with that property. This makes it trivial to get "100% coverage" with types, which is why people argue for static type coverage.
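A minimal sketch of that property (my own example, not from the thread): the definitions below carry no type annotations at all, yet GHC infers fully general types and statically checks every use of them.

```haskell
-- No signatures are written anywhere; GHC infers, e.g.,
--   slope :: Fractional a => (a, a) -> (a, a) -> a
-- so the whole module is fully type-checked with zero
-- annotation effort ("100% type coverage" for free).
slope (x1, y1) (x2, y2) = (y2 - y1) / (x2 - x1)

midpoint (x1, y1) (x2, y2) = ((x1 + x2) / 2, (y1 + y2) / 2)
```

A misuse such as `slope (0, 0) "oops"` would still be rejected at compile time, even though no types were ever written down.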
Additionally, Haskell specifically has a compiler switch (GHC's `-fdefer-type-errors`) to defer type errors to runtime, which means you can compile and test without 100% type coverage.
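A minimal sketch of that switch (the definitions are illustrative, not from the thread): with `-fdefer-type-errors`, an ill-typed definition is downgraded to a warning at compile time and only throws an exception if it is actually evaluated, so the well-typed rest of the module can still be run and tested.

```haskell
{-# OPTIONS_GHC -fdefer-type-errors #-}
{-# OPTIONS_GHC -Wno-deferred-type-errors #-}

-- This definition is ill-typed. With -fdefer-type-errors GHC
-- compiles the module anyway; evaluating `broken` at runtime
-- would raise an exception, but code that never touches it
-- runs normally.
broken :: Int
broken = "not an Int"

-- Well-typed code in the same module is unaffected:
double :: Int -> Int
double n = n * 2
```

This gives a middle ground between "mandated everywhere" and "no checking": you keep the checker's reports but are not blocked by them while iterating.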