Define true as a lambda taking two lazy values that returns the first, and false as one that returns the second, and you can turn all booleans into lambdas with no increase in code clarity.
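For concreteness, the encoding being parodied here looks like this in Haskell (illustrative names only; Haskell's laziness supplies the "lazy values" part for free):

```haskell
-- Church-encoded booleans: 'true' returns its first argument,
-- 'false' its second. The untaken branch is never evaluated.
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- "if" then degenerates into plain function application:
select :: (a -> a -> a) -> a -> a -> a
select b t e = b t e
```

`select true "yes" "no"` is just `"yes"` with extra steps, which is rather the point being made.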
The straw man in the post - talking about a case-sensitive matcher that selectively called one of two different functions based on a boolean - is indeed trivially converted into calling a single function passed as an argument, but it's hard to say that it's an improvement. Now the knowledge of how the comparison is done is inlined at every call point, and if you want to change the mechanism of comparison (perhaps introduce locale sensitive comparison), you need to change a lot more code.
That's one of the downsides of over-abstraction and over-generalization: instead of a tool, a library gives you a box of kit components and you have to assemble the tool yourself. Sure, it might be more flexible, but sometimes you want just the tool, without needing to understand how it's put together. And a good tool for a single purpose is usually surprisingly better than a multi-tool gizmo. If you have a lot of need for different tools that have similar substructure, then compromises make more sense.
This is just another case of the tradeoff between abstraction and concreteness, and as usual, context, taste and the experience of the maintainers (i.e. go with what other people are most likely to be familiar with) matters more than any absolute dictum.
Someone else addressed the details of your counter argument, but I'd like to respond to it generally.
It seems like every time someone writes an article on how to write better code, there are responses about how it doesn't make sense when taken to some logical extreme, or in some special case, as if that invalidates the argument. (FP techniques in particular seem to provoke this.) But code design is like other design disciplines: good techniques aren't always absolutes.
Do you really think that because the given example doesn't apply to every situation it's a 'straw man'? It is a little tiring to hear all code design advice dismissed this way.
> Define true as a lambda taking two lazy values that returns the first, and false as one that returns the second, and you can turn all booleans into lambdas with no increase in code clarity.
This is trivially true: any datatype can be encoded as a function. The post is not saying that we can pass any type of lambda whatsoever, but that we should pass lambdas that implement the required functionality.
> The straw man in the post - talking about a case-sensitive matcher that selectively called one of two different functions based on a boolean - is indeed trivially converted into calling a single function passed as an argument, but it's hard to say that it's an improvement. Now the knowledge of how the comparison is done is inlined at every call point
If call sites shouldn't choose which lambda (or boolean) to pass, simply define a new function that always passes the same lambda to the original function, and use it everywhere. (This could also be a good case for partial application.)
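A minimal sketch of that wrapper (the names `matchWith` and `matchCaseInsensitive` are invented here, not taken from the post):

```haskell
import Data.Char (toLower)

-- A matcher that takes its preprocessing step as a function
-- instead of a boolean flag.
matchWith :: (String -> String) -> String -> String -> Bool
matchWith norm pat target = norm pat == norm target

-- If call sites shouldn't choose, fix the lambda once by partial
-- application and use the specialized function everywhere. Changing
-- the comparison (say, to a locale-aware one) is then one edit here.
matchCaseInsensitive :: String -> String -> Bool
matchCaseInsensitive = matchWith (map toLower)
```

This also answers the locale concern above: the knowledge of how the comparison is done lives in one place, not at every call point.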
> That's one of the downsides of over-abstraction and over-generalization: instead of a tool, a library gives you a box of kit components and you have to assemble the tool yourself.
...and a framework is likely to give you a box of components to build a tool-making factory factory factory...
http://discuss.joelonsoftware.com/?joel.3.219431.12
A church encoded boolean is precisely isomorphic to every language's standard booleans (modulo strictness, perhaps) and doesn't offer any benefits; you're still forking the program based on the information content of a single bit.
Let's take the following function invocation, which can be expressed with Boolean literals or Church encoded booleans, I don't care:
match true false
If you want to determine the significance of the boolean values passed to this function, it does not suffice to go to the definition of 'true' or the definition of 'false'.
Now take something like this:
match caseInsensitive contains
Even though I have used descriptive names here, it's almost beside the point; I could just as easily have used nonsense names:
match foobar quux
If you want to know what 'foobar' means, you can go to its definition, and see how it preprocesses a string and a pattern. You don't have to guess about the meaning of a bit.
As a result, the semantics of 'match' and its parameters are all communicated more clearly, with less room for error, and much more generality.
There need not be any syntactic overhead: it is merely the replacement of some flag with a lambda which cleanly encapsulates the effect that would otherwise be encoded in the flag. The way you invoke the function is the same, but instead of twiddling bits to get what you want, you pass functions whose meaning does not require (as much) subjective and possibly error-prone interpretation.
Note this also objectively simplifies the functions themselves, because they formerly contained conditional logic, but once you rip that out and give them no choice (invert the control!), they have less room to err, which makes them easier to get right, easier to maintain, and easier to test.
There is also another way to view the issue: with booleans, we first encode our intentions into a data structure (at the caller site), and then we decode the data structure into intentions (at the callee site).
Well, why are we packing and unpacking our intentions into data structures? Why not just pass them through?
Indeed, we do that by pulling out the code and propagating it to the caller site (possibly with names so you don't need significantly different syntax and can benefit from reuse). Then our code more directly reflects our intentions, because we're not serializing them into and out of bits.
I think the general principle applies to more than booleans, but it's easiest to see with booleans.
A library can very easily provide, along with the kit components, convenience functions that perform common tasks - like matchCaseInsensitive or whatever. The point I took from the post is that, regardless of how the final public API is presented (and indeed, hopefully it doesn't involve piecing together umpteen bits), the code implementing it can be written by composing simple components rather than unwieldy conditionals.

#notallifs

I couldn't agree more, and this is why I think most FP programs are about as intellectually stimulating as `std::min_element`
Basically, if you structure the control flow in object-oriented style (or church encoding...) then it's easy to extend your program with new "classes", but if you want to add new "methods" then you must go back and rewrite all your classes. On the other hand, if you use if-statements (or switch, or pattern matching...) then it's hard to add new "classes" but very easy to add new "methods".
I'm a bit disappointed that this isn't totally common knowledge by now. I think it's because, until recently, pattern matching and algebraic data types (a more robust alternative to switch statements) were a niche functional programming feature, and because "expression problem" is not a very catchy name.
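The two axes can be seen in a toy example (a hypothetical `Shape` type, not from the thread):

```haskell
data Shape = Circle Double | Square Double

-- Adding a new "method" is easy: one new function, one case per
-- constructor, and no existing code changes.
area :: Shape -> Double
area (Circle r) = pi * r * r
area (Square s) = s * s

perimeter :: Shape -> Double
perimeter (Circle r) = 2 * pi * r
perimeter (Square s) = 4 * s

-- Adding a new "class" (say, a Triangle constructor) is hard: every
-- function matching on Shape must be revisited, and the compiler's
-- exhaustiveness warnings are what tell you where.
```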
Another alternative is "table-oriented programming", where you define the "classes" and "methods" as an m-by-n structure of code pointers; to add either "methods" or "classes", you would just add a new row/column to the table along with the appropriate code definitions.
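A sketch of that table (an association list standing in for the m-by-n array of code pointers; all names here are made up):

```haskell
-- Rows are "classes", columns are "methods"; extending either axis
-- means adding a row or a column plus its code. Nothing else changes.
table :: [(String, [(String, Double -> Double)])]
table =
  [ ("circle", [ ("area", \r -> pi * r * r), ("perimeter", \r -> 2 * pi * r) ])
  , ("square", [ ("area", \s -> s * s),      ("perimeter", \s -> 4 * s) ])
  ]

-- Dispatch is a two-step lookup into the table.
dispatch :: String -> String -> Double -> Maybe Double
dispatch cls method x = do
  methods <- lookup cls table
  f       <- lookup method methods
  pure (f x)
```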
> and because "expression problem" is not a very catchy name.
It's not particularly descriptive either, but the page mentions that it's a form of "cross-cutting concern", to which the table-oriented approach basically says "do not explicitly separate the concerns."
(More discussion and an article on that approach here: https://news.ycombinator.com/item?id=9406815 )
As a bit of a fun fact, doing table-oriented stuff in C is one of the few actual uses for a triple-indirection. :-)
Problem is, a decision has to be made somewhere about which function to pass into that "if-free" block of code. The if-like decision has just moved elsewhere. That is a win if it reduces duplication: if a lambda can be decided upon and then used in several places, that's better than making the same Boolean decision in those several places.
Programs that are full of function indirection aren't necessarily easier to understand than ones which are full of boolean conditions and if.
The call graph is harder to trace. What does this call? Oh, it calls something passed in as an argument. Now you have to know what calls here if you want to know what is called from here.
A few days ago, there was this HN submission: https://news.ycombinator.com/item?id=12092107 "The Power of Ten – Rules for Developing Safety Critical Code". One of the rules is: no function pointers. Rationale: Function pointers, similarly, can seriously restrict the types of checks that can be performed by static analyzers and should only be used if there is a strong justification for their use, and ideally alternate means are provided to assist tool-based checkers determine flow of control and function call hierarchies. For instance, if function pointers are used, it can become impossible for a tool to prove absence of recursion, so alternate guarantees would have to be provided to make up for this loss in analytical capabilities.
In a language like Haskell you wouldn't want to prove the absence of recursion, but that all recursions in the program fit into a handful of patterns. (Eg 'structural-recursion' or 'tail-recursion'.)
Some type systems are strong enough to put that kind of analysis / constraints directly into the language. (Haskell might already be strong enough with GADTs and other language extensions enabled.)
In any case, the Addendum at the end of the blog post provides a different perspective on the problem you mentioned.
If you're going to do this sort of thing with much success, you really need to have a language with a fairly powerful type system. If function pointers are your only option for higher-order programming, I wouldn't even try. First class functions or interface polymorphism help, but I'd also want to have a language that makes it relatively easy to create (and enforce) types so that your extension points don't end up being overly generic.
This is, as many commenters have noted, just another overzealous programming doctrine. Just like 'GOTO considered harmful.'
Here's the deal: if is a flow control primitive. Just like goto and while. If (heh) that primitive isn't high-level enough to handle the problem you are facing, it is incumbent upon you as a programmer to use another, higher-level construct. That construct may be pattern matching, it may be polymorphism (or any other form of type-based dynamic dispatch). It may be a function that wraps a complex chain of repeated logic, and is handed lambdas to execute based upon the result. It may, as in the article given here, be a function that is handed lambdas which apply or do not apply the transformation described.
The point is, there are many branch constructs, or features that can be used as branch constructs, in most modern programming languages. Use the one that fits your situation. And if that situation isn't all that complex, that construct may be if.
Fizzbuzz using guards is the most clean and modifiable fizzbuzz that I've seen in Haskell.
Although now that I think about it, if you provide a function with a list of numbers...
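The comment doesn't include the code, but a guard-based fizzbuzz in Haskell usually looks something like this:

```haskell
-- Guards read top to bottom; the first true condition wins.
fizzbuzz :: Int -> String
fizzbuzz n
  | n `mod` 15 == 0 = "FizzBuzz"
  | n `mod` 3  == 0 = "Fizz"
  | n `mod` 5  == 0 = "Buzz"
  | otherwise       = show n
```

And taking a list of numbers, as the comment muses, is then just `map fizzbuzz [1..100]`.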
"Bad IFs" are a code smell, and they're being scapegoated when the real problems are management demanding that simple hackish prototypes & tests be deployed into production, management that doesn't allow time for refactoring, and poor programmers who think that "bad IFs" are good code.
But the main site also doesn't do any reasonable job of defining what a "Bad IF" even is.
The crux of the matter is that programmers need time to craft the details of a project to avoid or correct technical debt. These sorts of reactions just point out one tiny portion of technical debt and don't solve any fundamental problems at all.
(And yeah, I know I'm ranting against the Anti-IF campaign, not the particular take on the linked site. But this article just seems to parameterize the exact same parameters that are branched on anyway.)
I think that aiming at the management of the coders and the business users is putting the emphasis in exactly the right place. Once we get to wondering if eliminating IF statements will help, we have passed by so many opportunities for 10x value delivery.
The idea that each type has its own control flow primitives is bothersome. It's taken over Rust:
argv.nth(1)
.ok_or("Please give at least one argument".to_owned())
.and_then(|arg| arg.parse::<i32>().map_err(|err| err.to_string()))
.map(|n| 2 * n)
I'm waiting for
date.if_weekday(|arg| ...)
Reading this kind of thing is hard. All those subexpressions are nameless, and usually comment-less. This isn't pure functional programming, either; those expressions can have side effects.
I don't agree here at all. The methods you show operate on an Option, and it's incredibly common to perform those kinds of comparisons, so it makes sense that they have convenience methods. This is not at all comparable to something like if_weekday.
This has not "taken over" rust. Result is another type that does this, but this makes sense for the same reasons.
Rust basically offers a Monad-like API there. That's perfectly fine and a well established pattern.
That has nothing to do with primitive control flow nor is that an indication of if_weekday appearing anytime soon.
That being said, having primitive control flow implemented as methods also has precedent in languages like Smalltalk or Self. That may be unusual but I don't think it's necessarily bad. I would be interested in reading about why this is bad design, though.
In Haskell, realizing that data flow and control flow are of the same spirit and that data structures are control structures is one of the key epiphanies to be had.
This article mentions 'if' and 'Boolean'. Loops and lists are another example. (And for the same reason that most languages make such extensive use of loops, Haskell programs can often have a lot of lists.)
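For instance, a counting loop and its accumulator typically become a list pipeline (a made-up example, not from the article):

```haskell
-- The list [1..n] plays the role of the loop counter; filtering and
-- summing play the role of the loop body and its accumulator.
sumOfEvenSquares :: Int -> Int
sumOfEvenSquares n = sum [ x * x | x <- [1 .. n], even x ]
```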
The annoying part there is the repeated "|x| x.". Rust should have syntax to reference a method of an object, instead of having to write a wrapper. So it'd look like .map_err(???.to_string()).
Most of this should be obviated when the ? operator is ready. But until then, there is no primitive for 'work on the type you wrapped in Option, short-circuiting and returning None at the first sign of failure', so it has to be done in the library.
This just seems to obscure the logic, not unlike how polymorphism can make code flow harder to read while feeling more clever.
There is a place for it - like when you're trying to express a set of logic that will be guarded by the same condition, but always at the cost of some complexity.
A set of conditionals is probably the most obvious way to express branching.
> The problem is computing the bit in the first place. Having done so, you have blinded yourself by reducing the information you have at hand to a bit, and then trying to recover that information later by remembering the provenance of that bit.
That's why you use Lua: it lets you have multiple return values. So you can get a boolean back to let you know if the strings were the same, an int to know where they ceased matching, and a boolean to let you know if they differ in case. It's then up to the programmer to decide how much enlightenment they want.
The "destroy all IFs" idea reminds me of "GOTO considered harmful" from the 70s. There are other ways to fix the problem.
Me too. Particularly because every programmer has their own idea of what a "right"/ideal style of programming is. Here, apparently, we must not use conditionals.
The more I write code the more I realize that the entire purpose of the code is to have some effect on reality, and the more reliably it can do this, the better the code. I find I code a lot better without design principles, because trying to remember which patterns are "good" and "bad" just obscures the attention I would have used to look at the code and sense whether something would work in this particular situation.
I recently did some refactoring at Google---we have a few bits of Haskell here and there---and did some similar things to what the author of the article proposed.
(Though the biggest impact of the refactoring was to remove two home-grown abstractions and a whole bunch of ad hoc transformations and replace them with the appropriate use of the very powerful, and well-understood Applicative.)
I read everything I could find on the Anti-IF site and didn't understand what the mission is exactly. They qualify and mention they want to remove the bad and dangerous IFs, but I couldn't find examples that differentiate between bad ones and good ones -- are there good ones according to this campaign?
I like using functional as much as anyone, and removing branching often does make the code clearer and remove the potential for mistakes.
But I admit I have a hard time with suggesting people prefer a lambda to an IF, or never use an IF at all. A lambda is, both complexity-wise and performance-wise, much heavier than an IF. And isn't it just as bad to abstract conditionals before any abstractions are actually called for?
I read everything I could find on the Anti-IF site and didn't understand what the mission is exactly.
I have a similar problem, in that every time I try to understand the perspective of functional-programming advocates, I find that the authors always seem to illustrate their points with examples like this:
match :: String -> Bool -> Bool -> String -> Bool
match pattern ignoreCase globalMatch target = ...
If I'm already literate in Haskell or Clojure or Brainfuck or whatever godawful language that is, then chances are, I'm already familiar with the strengths of the functional approach, and I'm consequently not part of the audience that the author is supposedly trying to reach.
So: are there any good pages or articles that argue for functional programming where the examples can be followed by a traditional C/C++ programmer, or by someone who otherwise hasn't already drunk the functional Kool-Aid?
I tried to ask the author the following (it kept getting deleted as spam). Perhaps he will see it here, but it's unlikely given how many comments there are already.
Hi John,
Are you familiar with Jackson Structured Programming?
Notice how the focus is on using control flows that are derived from the structure of the data being consumed and produced. Notice how the JSP-derived solution in the Wikipedia example lacks if-statements.
Pattern matching allows one to map control flow to the structure of data. What are your thoughts on that? I think inversion of control has other benefits, but I don't think it has much to do with the elimination of `if` conditionals; the pattern matching does that.
Also, I noticed one thing:
In the article you mention `doX :: State -> IO ()` as being called for its value and suggest that if you ignore the value the function call has no effect. Isn't it the case that a function of that type usually denotes that one is calling the function for its effect and not for any return value? Its value is just an unevaluated `IO ()`.
The return value of the function is a description of an effect. Calling the function doesn't cause the effect to happen. That's why you could, for example, call the function many times and get a list of IO actions which you then execute in parallel or backwards or whatever. Hence "inversion of control".
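A small sketch of that (the `greet` function is hypothetical, not from the article):

```haskell
-- 'greet name' is a description of an effect; evaluating it prints
-- nothing. Only when the program's main actually executes the
-- description does output happen, so we can collect descriptions
-- in a list and run them in whatever order we like.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

runBackwards :: [IO ()] -> IO ()
runBackwards = sequence_ . reverse
```

`runBackwards [greet "first", greet "second"]` greets "second" before "first", even though both descriptions were built in the other order.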
The author seems to ignore the fact that passing lambdas like this merely changes where the IF or SWITCH statement is made. I can agree that passing functions instead of booleans is better and more general. But pretending that IF/SWITCH are thus avoided, is delusional.
For instance, at some point there will be a decision made whether the string matching must be case sensitive or not. If the program can do both at runtime, the IF will be, perhaps, in the main (or equiv.).
Good writing has one clear imperative: communicate meaningfully the intent of the author to the reader. Good code is no different; it is merely expressive writing in a different language, with, perhaps, greater constraint on its intent.
Some people make up rules like "don't use adverbs", or "don't split infinitives", in an effort to write better. But this doesn't necessarily produce good writing; sometimes an adverb is just what you need.
The same is true of code. These are useful things to think about, but "destroy all ifs" is akin to "never use a conjunction".
If I understood correctly, the article suggests that as a general principle you should replace your union types and case-by-case code with lambdas. I feel almost the opposite.
Article: "In functional programming, the use of lambdas allows us to propagate not merely a serialized version of our intentions, but our actual intentions!"
Counterpoint: The use of structured objects instead of black box lambdas allows us to do more than just evaluate them. For example, Redux gets a lot of power by separating JSON-like action objects from the reducer that carries out the action.
But let's take instead the article's example of case-insensitive string matching. One tricky case is that normalization can change the length of the string: we might want the German "ß" to match "SS". Sure, the lambda approach can handle that. But now suppose that we want a new function that gives the location of the first match. It should support the same case-sensitivity options (because why not?). But now there is no way to get the pre-normalization location, because we encoded our normalization as a black-box function. Case-by-case code would have handled this easily.
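The length change is easy to see (a sketch; `normalize` is invented here and special-cases only the one character):

```haskell
import Data.Char (toUpper)

-- Uppercasing 'ß' to "SS" changes the string's length, so an index
-- found in the normalized string no longer maps back to a position
-- in the original. (Data.Char.toUpper is one-to-one and won't do this
-- mapping itself, hence the special case for illustration.)
normalize :: String -> String
normalize = concatMap (\c -> if c == 'ß' then "SS" else [toUpper c])
```

`length "straße"` is 6 while `length (normalize "straße")` is 7, so any match offset computed after normalization is already off.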
The first problem is that the "match" function is considered in the first place. It's too general. It should only be used in higher-order constructs where its flexibility is actually needed.
Second: the enum-based refactor is actually valuable and fine IMO. If you need string functions, stop there.
Now, shipping control flow as a library is a cool feature of Haskell. But, if those arguments are turned into functions, the match function itself isn't needed! It just applies the first argument to arguments 3 and 4, then passes them to the second argument.
match :: (a -> b) -> (b -> b -> Bool) -> a -> a -> Bool
match norm sub needle haystack = sub (norm needle) (norm haystack)
Does that even need to be a function? Perhaps. But if so, it's typed in a and b and functions thereof, and no longer a "string" function at all. And, honestly, why are you writing that function?
Typing it out where you need it typically has less mental impact, because I don't need to worry about the implementation of a fifth symbol named "match."
Isn't this exactly the Smalltalk way? In ST, what look like if-statements are actually messages passed to instances of Boolean, with lambdas (in Smalltalk: BlockClosures) as arguments. The boolean then makes the decision whether it will evaluate the lambda or not.
The inversion of control flow from the called to the calling function is an interesting way to describe (part of) functional programming style. I hadn't thought of it that way, even though I have been using it for quite some time.
General principle: for every possible refactoring, the opposite refactoring is sometimes a good idea.
So, yes, replacing booleans with a callback is sometimes a good idea. But in other situations, replacing a callback with a simple boolean might also be a good idea.
Also, advice like this is often language-specific. In languages whose functions support named parameters, boolean flags are easy to use and easy to read. If you only have positional parameters, it's more error-prone, so you might want to pass arguments using enums or inside a struct instead.
[+] [-] eru|9 years ago|reply
Some type systems are strong enough to put that kind of analysis / constraints directly into the language. (Haskell might already be strong enough with GADTs and other language extensions enabled.)
In any case, the Addendum at the end of the blog post provide a different perspective on the problem you mentioned.
[+] [-] bunderbunder|9 years ago|reply
If you're going to do this sort of thing with much success, you really need to have a language with a fairly powerful type system. If function pointers are your only option for higher-order programming, I wouldn't even try. First class functions or interface polymorphism help, but I'd also want to have a language that makes it relatively easy to create (and enforce) types so that your extension points don't end up being overly generic.
[+] [-] qwertyuiop924|9 years ago|reply
Here's the deal: if is a flow-control primitive, just like goto and while. If (heh) that primitive isn't high-level enough to handle the problem you're facing, it's incumbent upon you as a programmer to use another, higher-level construct. That construct may be pattern matching; it may be polymorphism (or any other form of type-based dynamic dispatch). It may be a function that wraps a complex chain of repeated logic and is handed lambdas to execute based on the result. It may, as in the article given here, be a function that is handed lambdas which apply or do not apply the transformation described.
The point is, there are many branch constructs, or features that can be used as branch constructs, in most modern programming languages. Use the one that fits your situation. And if that situation isn't all that complex, that construct may be if.
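To make that concrete (a sketch of my own, not code from the comment): Haskell's Prelude already ships branch constructs as plain functions. `either` is handed two lambdas and runs whichever one matches the value:

```haskell
-- `either` is an ordinary library function, yet it acts as a branch
-- construct: it picks which of the two handed-in lambdas to run.
describe :: Either String Int -> String
describe = either (\e -> "error: " ++ e) (\n -> "got " ++ show n)
```

So `describe (Right 3)` evaluates to `"got 3"`, with no `if` or `case` written at the call site.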
Fizzbuzz using guards is the cleanest and most modifiable fizzbuzz I've seen in Haskell.
Although now that I think about it, if you provide a function with a list of numbers...
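For reference, the guard-based version being praised might look like this (my own rendering, not code from the thread):

```haskell
-- Guards express the branch priority top-to-bottom: the 15 case must
-- come first, or "Fizz" would shadow "FizzBuzz".
fizzbuzz :: Int -> String
fizzbuzz n
  | n `mod` 15 == 0 = "FizzBuzz"
  | n `mod` 3  == 0 = "Fizz"
  | n `mod` 5  == 0 = "Buzz"
  | otherwise       = show n
```

And feeding it a provided list of numbers, as the parent muses, is just `mapM_ (putStrLn . fizzbuzz) [1..100]`.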
[+] [-] white-flame|9 years ago|reply
"Bad IFs" are a code smell, and they're being scapegoated when the real problems are management demanding that simple hackish prototypes & tests be deployed into production, management that doesn't allow time for refactoring, and poor programmers who think that "bad IFs" are good code.
But the main site also doesn't do a reasonable job of defining what a "Bad IF" even is.
The crux of the matter is that programmers need time to craft the details of a project to avoid or correct technical debt. These sorts of reactions just point out one tiny portion of technical debt and don't solve any fundamental problems.
(And yeah, I know I'm ranting against the Anti-IF campaign, not the particular take on the linked site. But this article just seems to pass in as parameters the exact same conditions that are branched on anyway.)
[+] [-] ryeguy|9 years ago|reply
This has not "taken over" rust. Result is another type that does this, but this makes sense for the same reasons.
[+] [-] DasIch|9 years ago|reply
That has nothing to do with primitive control flow, nor is it an indication of if_weekday appearing anytime soon.
That being said, having primitive control flow implemented as methods also has precedent in languages like Smalltalk and Self. That may be unusual, but I don't think it's necessarily bad. I would be interested in reading about why this is bad design, though.
[+] [-] eru|9 years ago|reply
This article mentions 'if' and 'Boolean'. Loops and lists are another example. (And for the same reason that most languages make such extensive use of loops, Haskell programs can often have a lot of lists.)
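A tiny illustration of that loop/list analogy (my own, not from the comment): where an imperative program would put a `for` loop with an accumulator, a Haskell program puts a list and list functions:

```haskell
-- The "loop" is a list comprehension; the accumulation is `sum`.
sumOfSquares :: Int -> Int
sumOfSquares n = sum [ x * x | x <- [1 .. n] ]
```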
[+] [-] throwaway13337|9 years ago|reply
There is a place for it - like when you're trying to express a set of logic that will be guarded by the same condition, but always at the cost of some complexity.
A set of conditionals is probably the most obvious way to express branching.
[+] [-] dwrensha|9 years ago|reply
An excerpt:
> The problem is computing the bit in the first place. Having done so, you have blinded yourself by reducing the information you have at hand to a bit, and then trying to recover that information later by remembering the provenance of that bit.
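The excerpt's point can be made concrete in Haskell (the names below are mine, not the excerpt's): a Bool-returning check throws away the witness, while a `Maybe`-returning search keeps it, so you never have to reconstruct the bit's provenance later.

```haskell
import Data.List (find)

-- Boolean-blind: we learn only *that* some admin exists.
hasAdmin :: [(String, Bool)] -> Bool
hasAdmin = any snd

-- Information-preserving: the result carries *which* entry matched.
findAdmin :: [(String, Bool)] -> Maybe (String, Bool)
findAdmin = find snd
```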
[+] [-] AstroJetson|9 years ago|reply
The "destroy all IFs" campaign reminds me of "GOTO considered harmful" from the 70s. There are other ways to fix the problem.
[+] [-] runeks|9 years ago|reply
The more I write code, the more I realize that the entire purpose of code is to have some effect on reality; the more reliably it can do that, the better the code. I find I code a lot better without design principles, because trying to remember which patterns are "good" and "bad" just diverts the attention I would have used to look at the code and sense whether something would work in this particular situation.
[+] [-] danbolt|9 years ago|reply
My gut thinks the solutions will be a little more boring than our inner magpies will want to admit.
[+] [-] eru|9 years ago|reply
(Though the biggest impact of the refactoring was to remove two home-grown abstractions and a whole bunch of ad hoc transformations and replace them with the appropriate use of the very powerful, and well-understood Applicative.)
[+] [-] dahart|9 years ago|reply
I like functional style as much as anyone, and removing branching often does make the code clearer and removes the potential for mistakes.
But I admit I have a hard time with suggesting people prefer a lambda to an IF, or never use an IF at all. A lambda is much heavier than an IF, both in complexity and in performance. And isn't it just as bad to abstract conditionals before any abstractions are actually called for?
[+] [-] CamperBob2|9 years ago|reply
I have a similar problem, in that every time I try to understand the perspective of functional-programming advocates, I find that the authors always seem to illustrate their points with examples like this:
If I'm already literate in Haskell or Clojure or Brainfuck or whatever godawful language that is, then chances are I'm already familiar with the strengths of the functional approach, and I'm consequently not part of the audience the author is supposedly trying to reach. So: are there any good pages or articles that argue for functional programming where the examples can be followed by a traditional C/C++ programmer, or by someone who otherwise hasn't already drunk the functional Kool-Aid?
[+] [-] externalreality|9 years ago|reply
Hi John,
Are you familiar with Jackson Structured Programming?
https://en.wikipedia.org/wiki/Jackson_structured_programming
Notice how the focus is on using control flow derived from the structure of the data being consumed and the data being produced. Notice how the JSP-derived solution in the Wikipedia example lacks if-statements.
Pattern matching allows one to map control flow to the structure of data. What are your thoughts on that? I think inversion of control has other benefits, but I don't think it has much to do with eliminating `if` conditionals; the pattern matching does that.
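Sketching what "control flow mapped to the structure of data" looks like in Haskell (the tree type is illustrative, not taken from JSP or the article): one equation per constructor, and no boolean test appears anywhere.

```haskell
-- The traversal's shape mirrors the data's shape: a clause for the
-- empty case and a clause for the node case, chosen by pattern match.
data Tree a = Leaf | Node (Tree a) a (Tree a)

sumTree :: Num a => Tree a -> a
sumTree Leaf         = 0
sumTree (Node l x r) = sumTree l + x + sumTree r
```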
Also, I noticed one thing:
In the article you mention `doX :: State -> IO ()` as being called for its value and suggest that if you ignore the value the function call has no effect. Isn't it the case that a function of that type usually denotes that one is calling the function for its effect and not for any return value? Its value is just an unevaluated `IO ()`.
[+] [-] AYBABTME|9 years ago|reply
For instance, at some point there will be a decision made whether the string matching must be case sensitive or not. If the program can do both at runtime, the IF will be, perhaps, in the main (or equiv.).
[+] [-] astazangasta|9 years ago|reply
Good writing has one clear imperative: communicate meaningfully the intent of the author to the reader. Good code is no different; it is merely expressive writing in a different language, with, perhaps, greater constraint on its intent.
Some people make up rules like "don't use adverbs", or "don't split infinitives", in an effort to write better. But this doesn't necessarily produce good writing; sometimes an adverb is just what you need.
The same is true of code. These are useful things to think about, but "destroy all ifs" is akin to "never use a conjunction".
[+] [-] MrManatee|9 years ago|reply
Article: "In functional programming, the use of lambdas allows us to propagate not merely a serialized version of our intentions, but our actual intentions!"
Counterpoint: The use of structured objects instead of black box lambdas allows us to do more than just evaluate them. For example, Redux gets a lot of power by separating JSON-like action objects from the reducer that carries out the action.
But let's take instead the article's example of case-insensitive string matching. One tricky case is that normalization can change the length of the string: we might want the German "ß" to match "SS". Sure, the lambda approach can handle that. But now suppose we want a new function that gives the location of the first match. It should support the same case-sensitivity options (because why not?). But now there is no way to get the pre-normalization location, because we encoded our normalization as a black-box function. Case-by-case code would have handled this easily.
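One way to keep that information is to pass a first-order description of the option rather than a pre-applied normalizer, so each consumer interprets it as needed. The sketch below is mine (names are made up), and its character-by-character comparison deliberately sidesteps length-changing cases like "ß"/"SS"; the point is only that positions into the original string stay recoverable:

```haskell
import Data.Char (toUpper)
import Data.List (findIndex, tails)

-- A structured option value instead of a black-box normalizing lambda.
data Casing = Sensitive | Insensitive

equate :: Casing -> Char -> Char -> Bool
equate Sensitive   a b = a == b
equate Insensitive a b = toUpper a == toUpper b

-- Because normalization was never pre-applied, a positional matcher can
-- still report an index into the *original* haystack.
firstMatch :: Casing -> String -> String -> Maybe Int
firstMatch c needle haystack = findIndex ok (tails haystack)
  where
    ok t = length t >= length needle
        && and (zipWith (equate c) needle t)
```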
[+] [-] jwatte|9 years ago|reply
Second: the enum-based refactor is actually valuable and fine IMO. If you need string functions, stop there.
Now, shipping control flow as a library is a cool feature of Haskell. But, if those arguments are turned into functions, the match function itself isn't needed! It just applies the first argument to arguments 3 and 4, then passes them to the second argument.
match :: (a -> b) -> (b -> b -> Bool) -> a -> a -> Bool
match canon sub needle haystack = sub (canon needle) (canon haystack)
Does that even need to be a function? Perhaps. But if so, it's typed in a and b and functions thereof, and no longer a "string" function at all. And, honestly, why are you writing that function?
Typing it out where you need it typically has less mental impact, because I don't need to worry about the implementation of a fifth symbol named "match."
sub (canon needle) (canon haystack)
[+] [-] skybrian|9 years ago|reply
So, yes, replacing booleans with a callback is sometimes a good idea. But in other situations, replacing a callback with a simple boolean might also be a good idea.
Also, advice like this is often language-specific. In languages whose functions support named parameters, boolean flags are easy to use and easy to read. If you only have positional parameters, it's more error-prone, so you might want to pass arguments using enums or inside a struct instead.
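In Haskell, which has no named parameters, the "inside a struct" variant can be sketched as a record of flags (field names below are made up for illustration); record syntax then documents each flag at the call site much like named arguments would:

```haskell
-- A record of named flags instead of positional booleans.
data MatchOpts = MatchOpts
  { caseSensitive  :: Bool
  , trimWhitespace :: Bool
  }

defaultOpts :: MatchOpts
defaultOpts = MatchOpts { caseSensitive = True, trimWhitespace = False }
```

A call site then reads `defaultOpts { caseSensitive = False }` rather than an anonymous `True, False` pair whose order must be memorized.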
[+] [-] nialv7|9 years ago|reply