
Imperative vs. Declarative

66 points | philip_roberts | 13 years ago | latentflip.com

51 comments


hackinthebochs|13 years ago

I'm not sure I agree with some of the examples in the article. They paint declarative programming as basically abstraction over details. The problem is that there is no line where an abstraction crosses the boundary into declarative programming. It's not really about abstraction but about control flow. If your code has a specific order it has to run in, then it's imperative: you're still describing the steps needed to perform the action. SQL is declarative because you're describing the output set rather than the steps to generate it. Functional languages are considered declarative because pure functions can be rewritten, optimized, lazily evaluated, etc. by the runtime. I have a hard time considering map/reduce/etc. in isolation as examples of declarative programming, as they're usually used in conjunction with an algorithm that most definitely has a defined execution order.
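The distinction can be sketched in Python (an illustrative comparison, not from the article): a loop commits to a specific execution order and mutates state step by step, while a comprehension only describes the members of the output, closer to how SQL describes a result set.

```python
# Imperative: prescribes the exact order of steps and the state mutations.
def evens_imperative(xs):
    result = []
    for x in xs:              # a defined iteration order
        if x % 2 == 0:
            result.append(x)  # explicit mutation at each step
    return result

# More declarative: describes what belongs in the output, not the steps.
def evens_declarative(xs):
    return [x for x in xs if x % 2 == 0]
```

Both return the same thing; the argument above is about whether the second form is genuinely declarative or just a tidier way to write the first.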

richardwhiuk|13 years ago

I agree. I don't recognize the examples here as declarative programming.

SQL and Prolog are both examples of declarative programming, so is Make to an extent. Using a map function doesn't make JavaScript a declarative programming language - it's a functional programming concept, not a declarative one.

camus|13 years ago

So what would be declarative programming? Does it even exist? I mean, at some point you need to write some logic, and logic is imperative. Take an HTML file: it is declarative, but the underlying logic is written somewhere else. So can you really have pure declarative programming? If it is possible, how?

MattRogish|13 years ago

I really like SQL. Sure, the language has warts but the ability to concisely represent WHAT you want, not HOW you want it, makes it very readable once you understand the simple constructs and how to properly design tables and indexes (not very hard).

For example, consider the problem of finding the second largest value in a set.

In SQL, I'd do something like:

  SELECT MAX( col )
    FROM table
   WHERE col < ( SELECT MAX( col )
                   FROM table )
  
It's pretty readable, and can almost be read in plain english: "Get the next biggest value from the table where it's smaller than the biggest value."

How might you do this in Java? http://stackoverflow.com/questions/2615712/finding-the-secon...

But look at all the other ways you can do it in that thread. None of them are very readable. And, they can hide subtle bugs that you won't find just by reviewing the code.

Ruby has a pretty concise answer if you happen to know the trick, and whether the sort's performance is miserable is kind of a gotcha question: http://stackoverflow.com/questions/8544429/find-second-large...

This is a very simple example, but as you scale up to more complex problems I almost always find SQL is fewer lines of code, more readable, and far less buggy (SQL tends to either work or not work - I find much more subtle bugs in a 30 line Java algo than a 10 line SQL command).
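For contrast, here is a sketch of a single-pass imperative version in Python (illustrative, not from the thread); it spells out exactly the bookkeeping that the SQL query leaves to the database:

```python
def second_largest(xs):
    """Largest value strictly smaller than the maximum, or None (mirrors the SQL)."""
    largest = second = None
    for x in xs:                          # explicit iteration and bookkeeping
        if largest is None or x > largest:
            if largest is not None and x != largest:
                second = largest          # old maximum becomes the runner-up
            largest = x
        elif x != largest and (second is None or x > second):
            second = x
    return second
```

Every branch here is a place for a subtle bug; the SQL version has nothing comparable to get wrong.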

masklinn|13 years ago

FWIW here's a very similar Python version:

    max(col for col in table if col < max(table))
a fun one is this:

    from itertools import permutations
    first, second = max(permutations(table, 2))
although its semantics are slightly different (if the maximum of the table is duplicated, it'll be returned for both slots)

or using heapq which notaddicted mentioned.

RyanMcGreal|13 years ago

Aside: Python is similar to Ruby, albeit using the sorted() function rather than the sort() method.

    sorted(vals)[-2]

tel|13 years ago

    maximum $ filter (< maximum xs) xs
or like the Ruby one, but with better error semantics

    (`atMay` 1) . reverse . sort
since we don't know that there exists such a column.

hcarvalhoalves|13 years ago

The important part is that, by abstracting the implementation from the declaration, the DBMS is free to compute this using whatever indexes, sorts, and memory allocation it has to. So I think the way you do this in other languages doesn't even compare, because it's not really the same thing.

The effect is you end up with a declaration that is highly intelligible, exactly because you don't have to write the implementation.

octo_t|13 years ago

Prolog is declarative programming taken to the maximum (excluding things like Answer Set Programming/clingo etc.).

In Prolog you ask questions. For example:

    subset([1,2], [2]).

then it goes away and says "yes". Or you want to know if any subsets exist:

    subset([1,2], B).

    B = [] ;
    B = [1] ;
    B = [2]

This makes it really, really nice for some surprising tasks (Windows NT used to ship with a Prolog interpreter for setting up the network).

xixixao|13 years ago

I would disagree - in a sense, Prolog is less declarative than Haskell. For example, the order of "procedure calls" matters in Prolog, a sign of imperative programming. There is no such thing in Haskell (unless imperative behavior is being simulated with Monads).

no-op|13 years ago

If you're familiar with or interested in Prolog, I would definitely recommend checking out Mercury. The language home page was just migrated and they're having broken-link issues, but here's a link: http://www.mercurylang.org/ Also, you can check out the Wikipedia page for a quick summary: http://en.wikipedia.org/wiki/Mercury_programming_language

The language has a lot of functionality that Prolog doesn't have and (thanks to a strong typing system) performs much better. It just needs a bigger community to support it.

philip_roberts|13 years ago

Ooh, I forgot all about prolog!

I worked through the 7 languages in 7 weeks book, and solving a sudoku with prolog blew my mind. I think the first "real" programming I did was a sudoku solver in Excel and VBScript (yeuch).

pacaro|13 years ago

Or to take his doubling example...

    double([], []).
    double([H|T], [H2|T2]) :- H2 is H * 2, double(T, T2).

icebraining|13 years ago

Map and other functional constructs may be declarative, but I only "feel" like I'm programming declaratively when I'm coding in a language like Prolog.

The fact that, with unification and backtracking, you can not only get a result for a query, but also "pass a variable" as an argument and get a possible value makes it seem much more like a mathematical expression and less like a computation.

For example, I can define some relations:

  parent_of(john, mary).
  parent_of(mary, eve).

  grandparent_of(X, Y) :- parent_of(X, Z), parent_of(Z, Y).

And then I can simply run a query:

  ?- grandparent_of(john, eve).
  Yes
But I can also make it fill in the value for me:

  ?- grandparent_of(john, X).
  X = eve
'grandparent_of' is not some piece of code, it's an actual declaration of a relation between the terms.

Of course, you can do unification and backtracking in other languages, but Prolog is designed for it.
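As a rough illustration of that point, the same facts and backtracking can be emulated in Python with generators (a sketch only; real unification handles much more, and all names here are made up):

```python
# The facts, as plain data.
PARENT_OF = [("john", "mary"), ("mary", "eve")]

def parent_of(x=None, z=None):
    """Yield (parent, child) pairs matching the facts; None plays an unbound variable."""
    for p, c in PARENT_OF:
        if x in (None, p) and z in (None, c):
            yield p, c

def grandparent_of(x=None, y=None):
    # Backtracking: try every binding of the intermediate "variable" z.
    for p, z in parent_of(x):
        for _, c in parent_of(z):
            if y in (None, c):
                yield p, c
```

`list(grandparent_of("john"))` yields `[("john", "eve")]`, much like the `X = eve` answer above, but Prolog gives you this behavior for every relation for free.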

PeterisP|13 years ago

On the flip side, it also drastically changes the typical errors.

In imperative style, most of your mistakes or carelessness will usually mean that the machine produces a wrong result or crashes in the process - a bad 'what'.

In declarative style, most of your mistakes or carelessness will usually mean that the machine takes a bazillion-times-less-efficient route to that result, possibly taking 'forever' or running out of memory - i.e. a bad 'how'.
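A concrete instance of a bad 'how', in Python (illustrative, building on the one-liner earlier in the thread): the declarative expression is correct either way, but one phrasing quietly does quadratic work.

```python
def second_largest_slow(table):
    # max(table) is re-evaluated for every element: O(n^2) overall.
    return max(col for col in table if col < max(table))

def second_largest_fast(table):
    biggest = max(table)  # hoisted out: computed once, O(n) overall
    return max(col for col in table if col < biggest)
```

Both say the same 'what'; only the hidden 'how' differs.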

JoshTriplett|13 years ago

I've found in declarative style, most mistakes just turn into compilation errors.

That said, the "why did it choose that terrible implementation?" problem does occasionally come up in declarative programming, and inherently never comes up in imperative programming.

Uchikoma|13 years ago

I don't think the author gets declarative right. It feels like he bolts a cool word onto some things he uses. Call me old fashioned, but I think Prolog is declarative, map() and reduce() are not.

hermannj314|13 years ago

Lately, when I code in C#, I write the code I wish was possible with the goal of trying to code to the problem as stated in the requirements. This way the code that solves the problem looks almost exactly like the description of the problem. That is step #1.

Step #2 is doing whatever is necessary to make that code work. Sometimes this means using the more interesting stuff like reflection, dynamic objects, expression tree visitors, etc. but I find that subsequent issues keep getting easier to deal with. This is because step #1 is naturally building a DSL for your problem domain and you start to find that what you did in step #2 is quite reusable.

I've been programming for a while, so I have experience with the imperative, "write the code that solves the problem" approach and it works too, but I am having fun with the "write the code that describes the problem" approach more.

Just my two cents.
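The two-step approach might look like this in Python (a sketch; the domain and names are invented for illustration):

```python
# Step 1: write the code you wish you could write. It reads like the requirement:
# "refund any order over 100 that was cancelled within 30 days".
def refundable(orders):
    return [o for o in orders
            if o["total"] > 100 and o["cancelled_within_days"] <= 30]

# Step 2: do whatever is needed to make step 1 work. Here that is just picking
# a representation; in C# it might mean reflection or expression-tree visitors.
orders = [
    {"id": 1, "total": 150, "cancelled_within_days": 10},
    {"id": 2, "total": 50,  "cancelled_within_days": 5},
]
matches = refundable(orders)
```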

hackinthebochs|13 years ago

This is what I love about C#: it really provides all the necessary components for this style of "wishful development". I've started doing this everywhere and the results are great. Like you said, the code itself literally reads like a specification. It's allowed me to turn what otherwise would have been an extremely tedious web application into something I enjoy working on.

As an example, I turned what otherwise would have been an extremely tedious exercise in writing tons of obscure SQL (creating reports from a very non-standard database layout) into an API for creating reports that is literally like reading the specification for the report. And all of it was done in about 250 lines of C#. And to top it off we still have complete static type checking! I really cannot sing the praises of C# enough.

taeric|13 years ago

It's not just programmers. Consider most cookbooks, then consider the directions that come with Ikea furniture. Of course, the real beauty of both of those examples is that they are a mix of declarative and imperative instructions.

For some reason, it seems we programmers are adamant that it must be one or the other: all of the examples are either purely imperative or purely declarative. Why not both?

hcarvalhoalves|13 years ago

Ideally, we would all write programs by assembling declarations, and imperative code would be limited to internal implementations. That's largely the reason it's good practice to abstract implementation away behind APIs: what you have left is almost a pure declarative language, or DSL, that maps 1:1 to your problem domain, without looping or branching or I/O (which are computational details).

Taking the example from the original article, it would be more akin to:

    // Implementation
    function double(n) {
      return n * 2;
    }

    // Declaration
    [1,2,3,4,5].map(double)
    => [2,4,6,8,10]

camus|13 years ago

Well, one declares variables:

    - given a pan
    - given 3 eggs
    - given a little olive oil
then one executes orders:

    - add the oil in the pan
    - break the eggs and add them in the pan
    - cook the eggs for 5 mins
    - serve the food hot
So I guess one is never really doing either purely declarative or purely imperative cooking/programming?

I like the cookbook metaphor for programming.

toki5|13 years ago

Great article, but one thing that's sort of glossed over here, and that I half-disagree with, is this:

>But we also get to think and operate at a higher level, up in the clouds of what we want to happen, and not down in the dirty of how it should happen.

The author mentions this at the end, but I feel it should be stressed more strongly: The dirty of how is important. The author presents a big "if" here, which is: if the function we've built to abstract away some ugliness performs in the best, most efficient way possible, with no drawbacks, then, yes, abstracting that functionality away and forgetting it is okay.

But to me that's a big if. It is just as important to understand and recognize why map is fast and efficient, so that someday, if you come across a situation where map does not apply, you will know why, and you'll be able to use something better.

Being up in the clouds all the time is, to me, a pipe dream -- we must always be cognisant of the ground on which we built this tower of abstraction layers.

saraid216|13 years ago

The fact that map is fast and efficient isn't really its selling point to me, though. It's that it's a simple concept, so I use it when the concept applies, not when I need to worry about efficiency. So it doesn't matter to me how it works, as long as it does what I expect.

jonjaques|13 years ago

OP could definitely have used more examples, but I think he's on the right track. Where declarative or functional programming comes in really handy is composition. Underscore has a lot of utilities that make it easy.

  var genericFilter = function(type, value) {
    return function(items) {
      return _.filter(items, function(i) {
        return i[type] === value;
      });
    }
  };

  var sizeFilter = genericFilter('size', selectedSize);
  var brandFilter = genericFilter('brand', selectedBrand);

  var appliedFilters = _.compose(sizeFilter, brandFilter);

  var filteredItems = appliedFilters(items);
  // which ends up doing  sizeFilter(brandFilter(items));
// edit for sloppy code;

simonv3|13 years ago

I work with a bunch of UX designers, and as the only developer here I'm often confronted with their question of "why can't I just describe what I want done?"

Their apprehension about tackling code is one I don't immediately understand, but I do get that they don't want to think about the how, rather the what. It's a funny parallel.

Here's a great video by Bret Victor who saw this problem, and tried to fix it for animation:

https://vimeo.com/36579366#t=1748

iambot|13 years ago

I prefer the imperative style personally, I like things done the way I want ... I kid, great write-up though.

ExpiredLink|13 years ago

What is the result of procedural programming? Functions that can be used declaratively! The purpose of procedural programming is to encapsulate and consequently eliminate "telling the 'machine' how to do something".

PS: What happened to 3GL vs. 4GL?