I've recently jumped into Erlang, and have come to some of the same conclusions. I read Armstrong's Erlang book, thought 'that seems cool', and didn't at all grok what was going on beyond the superficial. Then a few months later I just sat down and started solving Project Euler problems. At first it was strange and foreign, and I was mad that variables were immutable. A day later it just clicked, and I'd never had that much fun writing code.
The FP paradigm is (to me, even after 15 years of imperative programming) so much more natural for development, since you're able to sanely start attacking the problem directly -- instead of architecting a big-picture solution up front that's probably wrong anyway because you've ignored some detail you haven't yet discovered.
Building things with actions instead of objects just makes much more sense. Reuse comes much more naturally and doesn't seem as contrived as a lot of OOP reuse does. I've noticed that working out my FP muscles has made me a much better imperative programmer -- I write a lot more clever and effective code (not clever as in 'tee-hee-no-one-will-ever-figure-this-out'!).
One thing that raises my eyebrows is how much functional programmers talk about Project Euler problems. The actual programming involved in solving them is in fact trivial. They require some mathematical insight, especially after the first hundred or so; you need to do some external research on Pell's equation to avoid getting stuck, and you need a fraction library if your language doesn't have one built in. But am I wrong in thinking that these kinds of problems are almost no test or strain of your actual programming skill at all?
For me one of the real eye openers was that I'd gotten to the end of a fairly long book on FP without seeing a single assignment of a value to a variable.
Thank you for the reference to Project Euler. I'm learning Scala right now and it gave me just the right sized challenges for testing my knowledge. They're big enough to take time and thought, but small enough that you can solve them within the CLI interpreter.
I've used Erlang for a while, although sporadically. Honestly, though, I think I can do more with less code in Ruby. Erlang's great for the things it's good at, but brevity and clarity are things I consider critical.
Here's a small example I bumped into the other day, providing a default value if a hash value isn't present:
foo[:bar] || "beebop"
vs (or something close, there might be a typo)
case (catch dict:fetch(bar, Foo)) of
    {'EXIT', _} -> "beebop";
    Val -> Val
end
If you really start studying FP literature you'll discover that an enormous amount of ideas that are in vogue in today's languages have been invented years ago by functional programmers. For example, this paper from 1965(!) describes DSLs: http://www.cs.cmu.edu/~crary/819-f09/Landin66.pdf
I think learning a language like Haskell can be extremely good for you as a programmer. The problem is that you just can't expect to be productive while you're new to it, and that might be very frustrating if you're trying to get real work done. However, just as jacquesm writes, if you're doing it for fun you'll learn a lot (some of which you can apply directly to your normal programming).
There are about a thousand FP programmers for every imperative programmer, if not ten thousand. They use a tool you may have heard of, called "Excel", which is quite limited in scope.
It is an FP subset that is trivial to understand (much more so than imperative programming, for most users!) and that, coupled with usable I/O capabilities, is surprisingly sufficient for many uses.
(IDE, documentation, maintainability all suck, though; I wouldn't recommend it as your main FP tool if you can avoid it)
Yes! Excel formulas === FP! And as you say, there are orders of magnitude more Excel users than imperative programmers, so FP can't be as hard as is usually claimed, and certainly not as "unnatural".
(Before Excel 5 and VBA that Joel Spolsky claims he invented, macro language in Excel was also FP; it was a crazy but fascinating language in which I developed a whole billing application (in 1992...) I miss this language.)
To illustrate, Common Lisp directly supports functional programming but not dataflow programming (though it's possible to implement as a library).

It's funny; in French engineering schools we have the opposite reaction. In "prep" schools students are taught OCaml, which is their first programming language if they're not geeks.
Functions returning functions seem a natural thing as they are used to the exact same kind of abstraction in math (and even sometimes order 3+ functions when you study duality!). Conversely, they are initially puzzled when they are taught Java in their engineering school because of the difference between static variables and attributes, constructors and other unnatural concepts.
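The "functions returning functions" idea mentioned above is a one-liner in any language with first-class functions; a small Ruby illustration (all names invented for the example):

```ruby
# A function that returns a function: make_adder builds a new
# closure that remembers n and adds it to whatever it's given.
make_adder = lambda do |n|
  lambda { |x| x + n }
end

add_three = make_adder.call(3)
puts add_three.call(4)   # => 7
```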
It's great that more people are exposed to the functional programming style. Kudos to OP for trying something new.
It bugs me whenever FP people talk about state as if it were bad and should be avoided at all costs. State is bad if its scope is not carefully managed. Global state is generally bad because its scope allows the whole program to modify it, making it difficult to reason about its validity. Local state kept in a local variable in a function is perfectly fine: its scope is small and its changes can be tracked easily. Pure functional code actually maintains state implicitly too, in its parameters and return values, and in the passing of return values as parameters to the next function.
It's ultimately about making code easier to reason about. Immutability (at least as a default) makes the dataflow between independent portions of your program clearer, since every value is determined by where it came from, not where it came from and everything that could have potentially touched it along the way.
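A minimal Ruby sketch of that point (hypothetical data): in the functional version the running state lives in the accumulator that reduce threads through, not in a reassigned variable.

```ruby
# The "state" here is a running total, computed both ways.
prices = [3, 5, 7].freeze          # freeze forbids in-place mutation

# Imperative style: a variable is reassigned as the loop runs.
total = 0
prices.each { |p| total += p }

# Functional style: the accumulator is passed along, never reassigned.
total_fp = prices.reduce(0) { |acc, p| acc + p }

puts total      # => 15
puts total_fp   # => 15
```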
One thing you haven't mentioned, but which is related to anonymous functions (or lambdas) and is an important part of the FP style, is passing functions around as first-class objects. This is quite unusual in imperative programming, where it is normally only used to provide callbacks.
Let's say you want to write a function that exports your video library to an arbitrary medium. In FP, one way to do this is to create generator functions that create DVDs, Blu-Ray discs, etc. Then your export function would take the input and a generator function, and export the library using that function. In Common Lisp:
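The Common Lisp snippet this comment refers to didn't survive here, so as an illustration only, here is the same idea sketched in Ruby; all names (export_library, burn_dvd, and so on) are invented:

```ruby
# Two "generator" functions, one per medium.
def burn_dvd(item)
  "DVD: #{item}"
end

def burn_bluray(item)
  "Blu-Ray: #{item}"
end

# The export function takes the library and a generator function,
# and applies that function to every item.
def export_library(library, generator)
  library.map { |item| generator.call(item) }
end

library = ["Alien", "Blade Runner"]
puts export_library(library, method(:burn_dvd))
# Or, with an inline lambda instead of a named function:
puts export_library(library, ->(item) { "Tape: #{item}" })
```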
I'm going to do a completely separate post on the subject of first-class functions because I think it is too complicated a subject (with too many implications) to be squeezed in there as an aside; besides, I don't think I'm qualified to write that article in a strong enough way just yet. More understanding is required first.
It must be funny to all the FP gurus here to see someone struggling to understand the things that are second nature to them, but I find that it is surprisingly hard to teach this old dog a new trick. One part of me wants to say 'enough of this' all the time and reach for a C compiler just to get the job done :)
Take: (reduce (lambda (x y) ...) (map (lambda (x) ...) data-set))

When you actually read this, what do you do? You work inside-out to understand it. You figure out what 'data-set' is, then you figure out what '(lambda (x) ...)' does to it, and so on.
You (or I, at least) also write the code inside-out. You start with the data and an idea of how to transform it, and you work your way towards that transformation.

Compare to:

((data-set (lambda (x) ...) map) (lambda (x y) ...) reduce)

Of course, this brings up a lot of edge cases. E.g.: where does 'define' fit into this? You really want define and the variable name at the beginning.
This, in my opinion, is one of the greatest strengths that OOP has over FP: data.map(...).reduce(...). It doesn't really have anything to do with OOP, you could just as well have such a syntax for calling functions. In F# you do have the |> operator that does something like this.
This may seem superficial, but it helps readability a lot. The human mind (or mine at least) is just not well suited to unraveling nested structures.
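To make the contrast concrete, here is the same computation written both ways in Ruby (square_all and sum are names invented for the example): the nested version reads inside-out, while the chained version reads left to right, in the order things happen.

```ruby
data = [1, 2, 3, 4]

square_all = ->(xs) { xs.map { |x| x * x } }
sum        = ->(xs) { xs.reduce(0, :+) }

# Nested, inside-out: the innermost call runs first, so you read
# from the inside outwards.
puts sum.call(square_all.call(data))        # => 30

# Chained, left to right: each step reads in the order it happens.
puts data.map { |x| x * x }.reduce(0, :+)   # => 30
```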
I'm writing from a phone and lost a longer reply to expired link. In short:
You and the OP (cf. his assumption that you read functional code inside out) seem biased towards a chronological, bottom up reading of code.
An outside-in reading of prefix code is useful to get a top-down general grasp of the structure.
Both approaches are useful and complementary.
This duality also applies when writing code. Cf. "wishful thinking" in SICP: sometimes you want to assume away auxiliary or extraneous functionality to sketch an outline of your program.
In both reading and writing, alternating approaches helps to find the kernel of the problem quickly and to build a whole understanding at your pace, so you don't get bored or stuck.
To that effect, I find prefix notation more balanced, that is, easier to read both ways.
As a practical note, if you follow the 80-column convention, long().call().chains() are space hogs and awful to break up. The equivalent problem in Lisp is solved with a two-space indent.
But that's just an instance of the syntactic sugar debate. My point is not about notation but how you approach the code.
Stuart Halloway has a great explanation of anonymous functions in "Programming Clojure." He identifies the specific conditions where you might choose to use an anonymous function given that, for readability reasons, naming functions is usually a good idea.
I tried wrapping my head around Clojure but I just couldn't. I'm currently diving into Scala and finding it much easier to get into coming from an OOP background (I'm a Java developer by day and a Rails/Objective-C developer by night).
From the article: "You have to train yourself to start the understanding of code you're looking at from the innermost expressions, which are the first to be evaluated."
The author is in for some even more mind bending, when he eventually has a look at lazy languages.
Lazy evaluation: evaluate an expression only when you are actually going to do something with the result. For instance, in a generator you can generate 'infinitely long lists' because only those values that are consumed are actually generated.
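Ruby, to stick with a language mentioned in this thread, offers exactly that through its lazy enumerators; a small illustrative example:

```ruby
# An "infinite list" of squares. Nothing is computed until values
# are actually consumed; first(5) forces only the first five.
squares = (1..Float::INFINITY).lazy.map { |x| x * x }
p squares.first(5)   # => [1, 4, 9, 16, 25]
```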
Is that what you mean ?
Or do you mean when that principle is expanded to the whole language ?
Welcome to the party, Jacques! I think you and I started about the same time.
FP seems much richer than IP, but maybe that's just me. I know that once you get all the basic set operations, then you move on to continuations and Monads and it's like wow! A whole other world opens up. Then you can move on to all sorts of other cool stuff like super-scaling which kind of just "falls out" of FP. So it seems like there is more depth here for geekiness.
As far as bugs, I guess that the vast majority of bugs are related to either "state leakage" -- somebody tickling your variables while you're not looking -- or off-by-one errors. FP eliminates both of those. I know I try to stay as immutable as possible and my code feels a lot more solid than it used to.