top | item 20889579

arc33 | 6 years ago

I think the simplest way to understand recursion is as a for loop where you use the stack as the counter.

Recursion implements the same repetitive operations as a loop, but the loop controls aren't explicit as in a for loop; instead, the call stack serves as the loop's counter.

Once you think of recursion as just another way to do for loops, it immediately is demystified.
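To make that concrete, here's a minimal sketch (names are mine, just for illustration): the same sum written first as a for loop, then recursively, where each stack frame holds one value of the loop counter.

```python
# A plain for loop summing 0..n-1.
def sum_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

# The same computation written recursively: each call frame holds one
# value of the "counter" i, so the call stack plays the role of the
# loop variable.
def sum_rec(i, n):
    if i >= n:
        return 0
    return i + sum_rec(i + 1, n)
```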

zwkrt|6 years ago

I don't know if I would consider this easy for a beginner, but it does lead to an interesting result in the converse direction:

Any recursive solution can be turned into an iterative solution if you store the arguments of the recursive function in a data structure on the heap, and turn the recursive call itself into an access/modification of that data structure. As a specific example, any tail-recursive function can be turned into a plain loop that just updates its variables, with no auxiliary stack needed. Recursive functions that are not tail-recursive (such as fibonacci) will require more complex data structures, depending on their internal recursive structure. (fibonacci can use an indexable list, for instance.)

This is the heart of memoization and dynamic programming.
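As a hedged sketch of the fibonacci case mentioned above: the two recursive calls become reads from an indexable list of already-computed results, which is exactly the dynamic programming formulation.

```python
# Naive recursive fibonacci: not tail-recursive, recomputes subproblems.
def fib_rec(n):
    if n < 2:
        return n
    return fib_rec(n - 1) + fib_rec(n - 2)

# Iterative version: the recursive calls are replaced by indexing into
# a list of previously computed values (bottom-up dynamic programming).
def fib_iter(n):
    results = [0, 1]
    for i in range(2, n + 1):
        results.append(results[i - 1] + results[i - 2])
    return results[n]
```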

DonaldPShimoda|6 years ago

> Once you think of recursion as just another way to do for loops, it immediately is demystified.

I actually think this is not a good way to understand recursion. It's like saying "once you think of the lambda calculus as another way to build a Turing machine, it immediately is demystified." Yes, it's true that they're equally powerful, but their differences are what make one or the other more suitable to certain circumstances.

I think with recursion the best way to show this distinction is to look at navigation over a recursive structure, like a binary tree. You could implement any navigation over a binary tree with some nice loops, but they don't really mean anything in relation to the program: they're just a procedure to get the job done.

On the other hand, since binary trees are recursively defined (each node is either a leaf containing data or a branch containing two binary trees), using structural recursion to navigate the tree is significantly more straightforward than any looping construct. This is made super clear in a language like Haskell:

    data BTree
      = Leaf Int
      | Branch Int BTree BTree
    
    sumTree :: BTree -> Int
    sumTree (Leaf v) = v
    sumTree (Branch v l r) = v + (sumTree l) + (sumTree r)
But in, say, Python:

    from dataclasses import dataclass
    
    @dataclass
    class BTree: pass
    
    @dataclass
    class Leaf(BTree):
      v: int
    
    @dataclass
    class Branch(BTree):
      v: int
      l: BTree
      r: BTree
    
    def sum_tree(t: BTree) -> int:
      stack = [t]
      total = 0
      while stack:
        curr = stack.pop()
        if isinstance(curr, Leaf):
          total += curr.v
        elif isinstance(curr, Branch):
          total += curr.v
          stack.append(curr.l)
          stack.append(curr.r)
        else:
          raise RuntimeError("Invalid BTree subclass.")
      return total
The Python solution isn't obviously correct: you have to read through it and verify that I implemented the traversal properly, which assumes you already know what an iterative traversal looks like and can recognize one when you see it.

In contrast, the Haskell solution is clearly correct at a glance, I think. Ignoring the difference in length of code, the Haskell is clearly recursing over the structure of the tree. This is, in my opinion, significantly easier to reason about than any amount of looping.
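For what it's worth, a directly recursive Python version (using dataclasses of the same shape as the ones above; the names here are mine) mirrors the Haskell almost line for line, one case per constructor:

```python
from dataclasses import dataclass

@dataclass
class Leaf:
    v: int

@dataclass
class Branch:
    v: int
    l: object  # Leaf or Branch
    r: object  # Leaf or Branch

# Structural recursion: one case per constructor, just like the
# Haskell sumTree's two equations.
def sum_tree_rec(t):
    if isinstance(t, Leaf):
        return t.v
    return t.v + sum_tree_rec(t.l) + sum_tree_rec(t.r)
```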

I guess the caveat to all this is: recursion is well-suited for certain kinds of problems (such as navigating a recursive datatype), and loops are well-suited for different kinds of problems. But saying "they're equivalent so think of all recursion as loops" is missing the forest for the trees, I think.

arc33|6 years ago

[deleted]