The idea of managing side effects as a type (monads) still hasn't seemed compelling to me in terms of development time. Here I agree with Liskov in saying that it's a bit over the top[0]. Most of the quality I see with Haskell has to do with strict types, not the handling of I/O errors as types themselves.
Not that I want to bash it. Learning Haskell has actively changed how I approach all my C/C++ development, and gotten me far, far into the weeds of now learning Agda as a prototyping language for designs/semantics[1].
The world may or may not need Haskell. However it's certainly a better place now that it has it.
[0]: She said this at a talk she gave at work. It was similar or the same as her "The Power of Abstraction" talk; dunno if she makes the same comment in every presentation though.
[1]: http://www.youtube.com/watch?v=vy5C-mlUQ1w
The big win of "managing side effects as a type" is not that effectful code gets an IO type but that everything else does not. Thus the lack of IO in a function's type assures you, reliably, that the function does not cause side effects or depend upon the state of the outside universe. This lets you corral effectful code and keep the bulk of your code "pure" and easy to reuse and reason about.
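To make that concrete, here is a minimal sketch (the names are illustrative): the absence of IO in `double`'s type is a compiler-checked guarantee that it performs no effects, while `promptDouble`'s type advertises that it may.

```haskell
-- Pure: no IO in the type, so the compiler guarantees this function
-- cannot read files, touch the network, or depend on outside state.
double :: Int -> Int
double x = x * 2

-- Effectful: the IO in the type is the visible marker that side
-- effects may happen here.
promptDouble :: IO Int
promptDouble = fmap (double . read) getLine
```

Everything built only from functions like `double` stays pure, and the compiler enforces the boundary.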
Monads don't really have much to do with IO. It's up in the air whether IO really even is a monad. See Conal Elliott's answer on SO and the link he provides: http://stackoverflow.com/a/16444789/65799
I don't agree that managing effects is over the top per se; however, the use of monads feels over the top for pretty much everything!
It's always struck me as strange that values (as in a typical type system) and effects would be controlled through the same system. The type signature of a function and its effects seem very much orthogonal to me.
If we want to be controlling side effects then we really ought to be using a separate effect system[1]. There's a Scala plugin demonstrating this (although I've not tried it)[2].
With separate type and effect systems I should be able to define a pure function fib(n) and call it as fib(getValueFromUser()), that is, without having to use special operators to get at the value, which can only be used in certain contexts a la Haskell.
[1] http://en.wikipedia.org/wiki/Effect_system
[2] https://github.com/lrytz/efftp/wiki
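For comparison, this is roughly what the Haskell side of that trade-off looks like (`getValueFromUser` here is just a stand-in name): the pure fib cannot be applied directly to an IO action, so you lift it with an operator such as (<$>).

```haskell
-- An ordinary pure function (iterative Fibonacci).
fib :: Integer -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)

-- A stand-in for some effectful input source.
getValueFromUser :: IO Integer
getValueFromUser = readLn

-- fib getValueFromUser would not typecheck; the Functor operator
-- (<$>) lifts fib over the IO action instead.
fibOfUserValue :: IO Integer
fibOfUserValue = fib <$> getValueFromUser
```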
I'm having fantastic success with a custom IO-like monad that lets me statically verify that my unit test suite is totally side effect free.
I don't have to resort to documentation or code reviews to ensure that my teammates write fast, reliable tests. This isn't possible in a language that doesn't restrict side effects.
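The commenter doesn't show their monad, but a minimal version of the idea might look like this sketch (all names hypothetical): a Reader-style type that exposes canned fixture data and deliberately offers no way to lift IO, so an effectful test simply fails to typecheck.

```haskell
-- Hypothetical restricted test monad: tests may read fixture data,
-- but there is no way to lift IO into TestM, so the compiler rejects
-- any test that tries to perform real side effects.
newtype TestM a = TestM { runTestM :: [(String, String)] -> a }

instance Functor TestM where
  fmap f (TestM g) = TestM (f . g)

instance Applicative TestM where
  pure x = TestM (const x)
  TestM f <*> TestM g = TestM (\env -> f env (g env))

instance Monad TestM where
  TestM g >>= k = TestM (\env -> runTestM (k (g env)) env)

-- The only "effect" available: reading a fixture value.
fixture :: String -> TestM (Maybe String)
fixture key = TestM (lookup key)
```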
I don't have much experience with Haskell beyond Learn You A Haskell (a book), but I wonder how beneficial the absence of null pointer errors is in practice.
The way Haskell handles this reminds me of checked exceptions in Java. In Java, if you read from a file, your code won't compile unless you have a catch block that handles the possibility of an IO exception. This is called a checked exception because you have to check for the possibility or else your code won't compile.
I know many Java programmers handle checked exceptions by wrapping the checked exception in an unchecked exception so they don't have to deal with it. Don't Haskell developers end up doing the same thing with their Maybe concept?
Haskell avoids null by using a Maybe type/class (I always forget the terminology). A Maybe can evaluate to either a value or Nothing, a value that represents the absence of a value. (This is an oversimplification for the consideration of people who know nothing about Haskell.)
For example, you've got an associative map data type and you look up an element in there. At the time of writing the code you "knew" that the element "has to be there". Haskell makes you deal with the possibility that it's not. Won't most developers just end up throwing an exception in that case so they don't have to deal with the impossible possibility? Then, x months from now, when the code gets changed so that the map won't have the element there, all of a sudden your code throws an error. How is this different from a null pointer exception in any other language?
(Part of me is ignorant and part of me is playing the devil's advocate.)
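In code, the map scenario above looks like this (a small sketch using Data.Map from the containers library): lookup returns a Maybe, and the caller decides what the Nothing case means.

```haskell
import qualified Data.Map as Map

ages :: Map.Map String Int
ages = Map.fromList [("alice", 30), ("bob", 25)]

-- Map.lookup returns Maybe Int; the type forces callers to say
-- what should happen when the key is absent.
describeAge :: String -> String
describeAge name = case Map.lookup name ages of
  Just age -> name ++ " is " ++ show age
  Nothing  -> name ++ " is not in the map"
```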
This really depends. The usual rule of thumb is that a runtime exception (at least in pure code) is for bugs in the code, while explicit mechanisms like Maybe are for problems with the input. So if getting a Nothing from your map means your own code is broken, an exception is perfectly fine; if it means your code was called wrong, you want to just return a Maybe.
In practice, most of the time, you end up either coming up with a default or returning the Maybe. This is greatly helped by the fact that propagating a Maybe value is really easy, because Maybe is a member of a bunch of standard typeclasses like Applicative, Alternative and Monad. Thanks to this, I've found most of my code follows a simple pattern: it pushes the Maybe values through until it has a meaningful default. This is safe, simple and fairly elegant. At some point, I either pattern match against it or use a function like fromMaybe, which then allows me to plug everything back into normal code.
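A small illustration of that pattern (`parsePort` is a made-up helper): the Maybe flows through the pipeline untouched, and fromMaybe supplies the default at the single point where one is meaningful.

```haskell
import Data.Maybe (fromMaybe)

-- Hypothetical parser that can fail.
parsePort :: String -> Maybe Int
parsePort s = case reads s of
  [(n, "")] | n > 0 -> Just n
  _                 -> Nothing

-- The Maybe is propagated with (>>=) and only resolved at the edge,
-- where 8080 is a meaningful default.
portOrDefault :: Maybe String -> Int
portOrDefault setting = fromMaybe 8080 (setting >>= parsePort)
```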
Generally, when you have null pointer errors, it is because the developer did not realize that there was the possibility of a null value at that point in the code. Adding this information to the type system lets the developer know exactly where the issue might occur (and lets the compiler complain if it is not handled). Also, in terms of use, Maybe has another major advantage over null. In languages with null you often see the pattern
    if (foo == null) { return null; } else { ... }
This pattern is handled automatically by Maybe: if you try applying a function to Nothing (Maybe's version of null), then the result is also Nothing, even if the function itself was not designed to handle Maybes.
Additionally, as a matter of culture, Haskell programmers rarely throw exceptions.
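The automatic propagation described above is just Functor's fmap. For instance, `length` knows nothing about Maybe, yet (hypothetical example):

```haskell
-- length was never written to handle Maybe, but fmap lifts it over
-- one: a Just maps to Just, and Nothing stays Nothing.
nameLength :: Maybe String -> Maybe Int
nameLength = fmap length
```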
Working with a Maybe isn't that much better than working with a type that might be null in Java, C#, or C++. You do have to go slightly farther out of your way to be "bad" and ignore the Nothing case, but not much -- you could unwrap all your Maybe values with fromJust and turn them into exceptions as you suggest.
The real benefit is the ability to have things that aren't Maybe. I would say that in any language, most functions don't have meaningful answers for null inputs (especially when you count the 'this' object input in object oriented languages). Most functions also don't produce nullable results. Yet, in C# or Java, reference types could always be null. The cases where that nullability is a real possibility that you need to handle are difficult to identify.
Having nullability be opt-in with Maybe, rather than an inescapable part of all reference types, lets you confine worrying about the null case to the typically smaller number of places where it can actually meaningfully occur. Hopefully that makes handling it correctly more likely.
I think that in the particular case of an associative map, you are generally correct: there is no gain.
However, in other situations the Haskell approach is much better. Say you have a Java class with five fields; three of them are nullable (i.e. have a sensible notion of being null) and two are not (have to be non-null unless there is a programming error). There are tons of places where you can mess up: assign null to a non-nullable field, write "x = f()" without noticing f can return null, call "y.f()" forgetting that y can be null, etc. However, if your type system handles nullability, you will get a compile-time error for each of them. Better still: if you change the nullability of a field, compiler errors will point out the places that have to be changed.
By throwing an exception when you "know" an element in the hashmap has to be there, you give up the safety offered by the compiler. In Haskell this is a code smell; Haskellers are less content with such hacks and might try to redesign the structure so that the invariants are enforced by the type system.
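The Haskell version of that five-field class might be sketched as a record (field names invented here) where nullability is declared per field and checked at every use site:

```haskell
-- Optionality is opt-in, field by field; assigning Nothing to
-- userName, or forgetting to handle nickname's Nothing case, is a
-- compile-time error rather than a latent NullPointerException.
data User = User
  { userId    :: Int           -- never absent
  , userName  :: String        -- never absent
  , nickname  :: Maybe String  -- sensibly optional
  , homepage  :: Maybe String  -- sensibly optional
  , lastLogin :: Maybe String  -- sensibly optional
  }
```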
The important bit isn't the Maybe-types, but all types that are _not_ Maybe.
If types aren't non-nullable by default then any function parameter or return value can potentially be null and a robust program practically needs to have null checks _everywhere_. Otherwise, when your program throws a NullPointerException you'll have no way to know where that null originated from.
In Haskell, the types guarantee that you never ever have to check for null (in fact, you can't even) unless the function's type explicitly allows for the possibility of having a null. In addition, you have several ways to compose function calls that might return null in such a way that you don't have to check for each null case separately.
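One such composition, as a sketch: Maybe's Monad instance chains lookups that can each fail, and a single Nothing short-circuits the whole chain without any explicit checks.

```haskell
config :: [(String, String)]
config = [("host", "example.com"), ("user", "alice")]

firstChar :: String -> Maybe Char
firstChar []      = Nothing
firstChar (c : _) = Just c

-- Two failure-prone steps composed with (>>=); no null checks, and a
-- missing key or empty value yields Nothing overall.
userInitial :: Maybe Char
userInitial = lookup "user" config >>= firstChar
```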
A typical Haskell project will have thousands upon thousands of uses of Maybe, and only a handful of calls to the unsafe fromJust. Sure, sometimes the Maybe wrapper is wrong and you have to force it, but that is exceedingly rare, and in the vast majority of cases where you have a Maybe you need to actually handle the Nothing case. The compiler makes sure you do. It also makes sure you remember not to give Nothing to a function that doesn't expect it. It also reminds you to think really hard before using the unsafe fromJust.
Without it you can easily hand null to anyone who doesn't expect it, and forget to think about null when receiving it.
a.) Haskell gives you tools to deal with it. Unlike checked exceptions, which must be restated and dealt with in every function along the chain, Haskell provides powerful abstraction capabilities to manage this. (You can use Maybe values while most of the functions know nothing about them.)
b.) This is a tool that can help you if you don't shoot yourself in the foot. The compiler can help guarantee that no one "just throws an exception" by emitting warnings/errors on partial functions and/or the use of unsafePerformIO. These are the only two escape hatches that would allow you to do such a thing, and they are both heavily frowned upon and detectable by the compiler.
> For example, you've got an associative map data type and you find an element in there. At the time of writing the code you "knew" that the element "has to be there". Haskell makes you deal with the possibility that it's not. Won't most developers just end up throwing an exception in that case so they don't have to deal with the impossible possibility? Then, x months from now when the code gets changed so that the map won't have the element there, all the sudden your code gets an error. How is this different from a null pointer exception in any other language?
I don't understand. You're saying that there is a mismatch between what the programmer knows and what the compiler can deduce? That you know, from the state the program must be in, that the map has to contain the value associated with the given key? I can't think of an example where that would be the case off the top of my head.
I guess a similar case is if you have to apply the head function (take the first element) to a list you know is non-empty. If you weren't sure and didn't check that the list is non-empty before taking head, you might get an error at runtime from taking head of an empty list, which is undefined in Haskell. In a dependently typed language that is impossible even at runtime (much like null pointer exceptions are impossible to get in Haskell).
Why would you throw an exception in Java? If you're sure that a value will be returned (as opposed to a sentinel value... aka null pointer), then just happily dereference it.
You seem to be coming at this from a weird angle. The Maybe type lets you statically mark all things that might be "null", which is a big improvement over the Wild Wild West where everything might be null. Now you seem to be asking "how does this make dealing with the semantics of absent values easier?", to which I guess the answer is: it doesn't. You still have to mull over what you should do if things are missing, or if they deviate from the normal. But you can throw exceptions, or define things that you don't want to deal with, or that truly are undefined, as simply "undefined", much like in most other languages.
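On the head example a few comments up: Haskell doesn't need full dependent types for that particular case, because Data.List.NonEmpty already makes head total. A sketch:

```haskell
import qualified Data.List.NonEmpty as NE
import Data.List.NonEmpty (NonEmpty)

-- NE.head is total: a NonEmpty list cannot be empty by construction,
-- so no runtime emptiness check (and no crash) is possible.
firstItem :: NonEmpty a -> a
firstItem = NE.head

-- Converting an ordinary list surfaces the empty case as a Maybe.
safeFirst :: [a] -> Maybe a
safeFirst = fmap NE.head . NE.nonEmpty
```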
While I agree that Haskell is a great language and everyone should learn it because it's fun to play with, I think the solution to the author's problems (and the next step in the current evolution) would be something like Kotlin.
It's a Java-like language (no need to manage pointers), and it doesn't have the null-pointer problem, but it's still a high-level imperative OOP language.
While many people say their Haskell code is easier to read because of the "side-effect-free" guarantees, that has not seemed true to me for some time. In Haskell, when your code gets complicated (and starts to have patterns you want to avoid typing out), you start writing monads. And when you start writing monads, your code gets harder to read, since you need not only to consider the code but also to keep the (>>=) operator (all of them, if you use multiple monads combined via transformers) in your head for every pair of lines. Your code can suddenly have something like global variables (dynamic scoping) hidden in a monad (as with the State monad), its flow can be changed dramatically, and there are other surprises.
That said, I agree that Haskell code is typically dense, and suffers from readability problems.
> It’s also why ruthless testing and 100% test coverage have become so important in mainstream languages. But even with 100% test coverage you can’t be sure that your code will work correctly if a function unexpectedly returns nil unless you’re also mocking things out or using fuzz testing.
You can write tests with 100% coverage that don't use a single assertion. Test coverage is a completely useless metric, but it is an easy one to measure and understand, which is why it is so popular in pseudo-QA and the management tier.
I frequently see people talking about an unqualified "100% test coverage". Is it safe to assume in these cases that the author is referring to statement coverage[1] rather than something more stringent like decision coverage or even MC/DC?[2]
If we're talking about statement coverage then I agree that achieving 100% statement coverage and calling it a day really isn't as helpful as it might sound.
[1] https://en.wikipedia.org/wiki/Code_coverage#Basic_coverage_c...
[2] https://en.wikipedia.org/wiki/Code_coverage#Modified_conditi...
Can you list those same features that it provides? My understanding of Scala's philosophy is it provides as many features as it can. A Maybe type, immutable state, separation of IO, etc. isn't very useful if my coworker can choose not to use it.
Scala doesn't really provide much of what is mentioned in the article.
It still has nullable types, side effects aren't tracked in the type system (so any function can potentially do anything), and the presence of subtyping has several unpleasant consequences, for example: lack of full type inference, generally more complex type signatures than in Haskell, and having to deal with covariance vs. contravariance.
It strikes me that 'puzzle languages' are just languages with non-mainstream semantics. If we lived in an alternate universe where the dominant paradigm was entirely pure, then those odd languages where any bit of code can change any part of the state of the program would be puzzle languages.
Hague defines puzzle languages by referencing the experience of realizing you're going down the wrong path and having to completely restructure your code; I have had this experience in Python and Java in the past, and conversely, I program in Haskell more or less daily and therefore almost never have that experience there any longer.
Personally I find Python, Ruby, Lua, and C to be "puzzle" languages. The puzzle is how to manipulate primitives like arrays, pointers, dictionaries, etc. to express the natural, higher-level concepts that you really want, like map, fold, etc.
Programmers in my previous job would feel pleased with themselves if they'd solved the "puzzle" of an array processing loop which was a simple recursive function in disguise.
I intend to learn some Haskell but I keep wondering if it might be more useful to spend some time with Clojure or F#.
Haskell certainly has a big learning curve, but the experience will be much more rewarding than learning Clojure or F#. Haskell is a (relatively) uncompromising language and Clojure and F# are really just trying to take some of the features of Haskell and bring them into the mainstream. So if you learn F# or Clojure it may help you if you ever try to learn Haskell, but if you learn Haskell, F# and Clojure will be trivial to learn.
You could just as easily say that Python (one of the languages listed as non-puzzle) is a puzzle language because it lacks goto, so you have to puzzle out ways to model your control flow and looping into the more rigid while-loops, for-loops and if-else-branches. "I want to execute this block at least one time, but not necessarily the second, and I have to check these conditions here, but I have to declare the boolean variable over here... Oh my kingdom for a goto!"
Right, but how does that give me any assurance that the value contained within the optional is itself not null? Haskell gives me that assurance.
If all pointers can be null, then every pointer type is an implied optional type. Haskell's advantage is that it allows us to define types which cannot be null.
I skimmed so perhaps I missed something, but it looks like this is just the usual argument in favor of functional programming. I don't find it convincing because there's more to programming than getting the right output. Performance and memory usage are also important, and Haskell is extremely opaque in that respect. A strict language with optional laziness makes it much easier to reason about performance.
Haskell zealots are using the same nonsensical arguments that Java used, claiming to be a "safe" language where the compiler and static typing "eliminate common bugs". This is a naive meme that almost everyone repeats to each other.
First of all, no static type system, however sophisticated, can protect against incompetence, lack of knowledge of underlying principles, or plain stupidity. The dream that idiots will write decent code will never come true, no matter how clever the tools, because it is impossible to write any respectable code without understanding the hows and whys.
But for those who have managed to understand the core ideas and concepts on which programming languages are based (immutability, passing by reference, the properties of essential data structures such as lists and hash tables), it is possible to write sane and reasonable code even in C, let alone CL or Erlang, and the type system becomes a burden rather than an advantage.
So Haskell is really good for mastering functional programming (which is much better learned through the old classic Scheme courses) and for understanding the ideas it rests on: what function composition, environments, currying, partial evaluation, and closures are, why and when they are useful and handy, and how clear and concise everything (syntax and semantics) could be if you just stopped there. Just skip the part about monads; they are overhyped fanciness to show off.
Learning Haskell after Scheme/CL really clarifies one's mind with realizations of how the same foundational ideas work in an alien (statically typed) world, and how everything is clean and concise until you start messing it all up with "too advanced typing".
Again, it is much better to learn the underlying ideas (why it is good to separate and pay special attention to functions that perform IO, what recursive data structures are, and why null pointers exist in the first place) than stupid memes like "monads are cool" or "Haskell prevents bugs".
The trick is that dynamic languages with proper testing (writing tests before code) are no worse than this "static typing safety", and the very word "safety" is just a meme.