niklasni1 | 10 years ago
Another example is instead of shuffling a bunch of bare UUID objects around (or strings, for that matter) in a system where lots of different things have a UUID, I can make a simple reference type for the IDs of different entities. This way, calling a function that takes the UUID of one type of entity with that of another can be a type error that's caught by the compiler instead of a logic error that's caught by a unit test. This is cumbersome at best to do in Java or C, and obviously impossible in Python or Ruby, but in ML-inspired languages it's simply the most natural way to work.
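To illustrate, here is a minimal sketch of that idea in Rust (an ML-influenced language); the `UserId`/`OrderId` names and the `lookup_user` function are hypothetical, just stand-ins for "different entities that all happen to be identified by a UUID-like string":

```rust
// Distinct wrapper types around the same underlying ID representation.
// A plain String (or bare UUID) would let any ID be passed anywhere;
// these wrappers make mixing them up a compile-time type error.
#[derive(Debug, Clone, PartialEq)]
struct UserId(String);

#[derive(Debug, Clone, PartialEq)]
struct OrderId(String);

// Only accepts a UserId -- an OrderId is rejected by the compiler.
fn lookup_user(id: &UserId) -> String {
    format!("user:{}", id.0)
}

fn main() {
    let uid = UserId("123e4567".to_string());
    let _oid = OrderId("ffffffff".to_string());

    println!("{}", lookup_user(&uid));
    // lookup_user(&_oid); // compile error: expected `&UserId`, found `&OrderId`
}
```

The wrappers cost nothing at runtime; the whole point is the commented-out last line, which the compiler refuses instead of leaving it to a unit test.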
davelnewton | 10 years ago
fghfghgfhfg | 10 years ago
It might add some additional complexity when first writing the code, but in return you eliminate whole classes of problems forever. It's not just the initial writing that benefits (at some cost, admittedly): all future changes won't have those problems either. In cases where the types themselves need to change to account for expanded functionality or whatever, you again pay some cost in complexity, but in return you get a nice error at every place the new type would cause problems.
The other thing to keep in mind is that whether or not these types are codified in the language, they're there conceptually. Having to write them down doesn't necessarily add complexity; it's more like forced documentation that can be used to eliminate whole classes of problems. It's difficult to see how this wouldn't intrinsically be better than so-called dynamic typing.
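A small Rust sketch of the "types change, compiler finds every affected site" point above; the `PaymentMethod` enum and `fee` function are hypothetical:

```rust
// When a type gains a new case, the compiler flags every match
// that doesn't handle it yet -- the "nice error" at each affected site.
#[derive(Debug)]
enum PaymentMethod {
    Card,
    BankTransfer,
    // Adding a variant here (say, `Crypto`) turns every
    // non-exhaustive `match` below into a compile error.
}

fn fee(m: &PaymentMethod) -> u32 {
    match m {
        PaymentMethod::Card => 30,
        PaymentMethod::BankTransfer => 10,
    }
}

fn main() {
    println!("{}", fee(&PaymentMethod::Card));
}
```

Extending the enum doesn't silently leave stale logic behind; each `match` that needs updating is pointed out at compile time.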
jeremyjh | 10 years ago