Most programmers are comfortable with loops. I think that is because loops are a necessary flow construct in most imperative programming languages. But when doing high-level programming, recursion has several key advantages:
- the logic is shorter and more precise
- the code is more expressive
- there is little or no surrounding boilerplate where bugs can hide
- the logic is isolated
- reading the code doesn't require evaluating loops in your head while remembering previous states of variables
Yet it is still argued that loops are, on occasion, superior where the programmer has a choice. One common example seems to be the Fibonacci function, and one such case appears in ANSI Common Lisp by Paul Graham. Here's the quote from the book:
The other issue to bear in mind is that the obvious recursive algorithm is not always the most efficient. The classic example is the Fibonacci function. It is defined recursively,
- Fib(0) = Fib(1) = 1.
- Fib(n) = Fib(n - 1) + Fib(n - 2).
but the literal translation of this definition,
(defun fib (n)
  (if (<= n 1)
      1
      (+ (fib (- n 1))
         (fib (- n 2)))))

is appallingly inefficient. The same computations are done over and over. If you ask for (fib 10), the function computes (fib 9) and (fib 8). But to compute (fib 9), it has to compute (fib 8) again, and so on.
Here’s an iterative function that computes the same result:
(defun fib (n)
  (do ((i n (- i 1))
       (f1 1 (+ f1 f2))
       (f2 1 f1))
      ((<= i 1) f1)))

The iterative version is not as clear, but it is far more efficient. How often does this kind of thing happen in practice? Very rarely – that's why all textbooks use the same example – but it is something one should be aware of.
The iterative version is “not as clear?” That's a severe understatement. Try reading it – at first glance it's just a bunch of gibberish. It takes reading it over and over a few times to understand the logic. Pay attention to how f1 is defined in terms of f2, and then f2 is defined in terms of f1. Then it takes another massive wave of thought to evaluate the loop several times in your head to validate its logic. Are you sure there's no bug, or that you haven't left out an edge case? It's not easy to tell. And this function is just adding two numbers. Imagine what happens when you actually need to do something more complex! But you don't have to imagine – if you're reading this post, you have probably written loops that are a lot worse than this.
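To make the loop's mechanics explicit, here is the same function again with comments added (the comments are mine, not the book's). The key subtlety is that do updates all of its bindings in parallel, which is exactly what makes f1 and f2 confusing:

(defun fib (n)
  (do ((i n (- i 1))        ; counter: starts at n, decreases by 1 each step
       (f1 1 (+ f1 f2))     ; the next Fibonacci number: sum of the previous two
       (f2 1 f1))           ; trails one behind f1: it receives f1's OLD value,
                            ; because do evaluates all step forms before updating
      ((<= i 1) f1)))       ; stop when the counter reaches 1 and return f1

None of that is visible on the page until you already know how do's parallel stepping works.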
Paul Graham is right that the recursive implementation as he provides it is very inefficient. But by framing the argument this way, he gives readers the false impression that in these kinds of cases one should always pick the iterative solution. The bar then becomes completely arbitrary, and most programmers, because of the habits mentioned above, write iterative logic.
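To see just how inefficient, here is a quick experiment of my own (not from the book): wrap the naive version with a call counter and check how often it is entered.

(defvar *calls* 0)

(defun fib-naive (n)
  (incf *calls*)                   ; count every entry into the function
  (if (<= n 1)
      1
      (+ (fib-naive (- n 1))
         (fib-naive (- n 2)))))

;; At the REPL:
;; (progn (setf *calls* 0) (fib-naive 20))  => 10946
;; *calls*                                  => 21891

Almost twenty-two thousand calls to produce a single number, and the count roughly multiplies by the golden ratio with each increment of n.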
Instead, we should write the Fibonacci function in a recursive form that is expressive, concise, and efficient. Many programming books I have read concentrate on the factorial function, so I thought I'd take a shot at writing a Fibonacci function without looking it up, especially since I'm new to and still learning Lisp. This is what I initially came up with:
(defun fib (n)
  (fib-raw n 1 1 1))

(defun fib-raw (n f0 f1 cur)
  (cond ((eql n 0) 1)
        ((eql n 1) 1)
        ((eql cur n) f1)
        (t (fib-raw n f1 (+ f0 f1) (+ cur 1)))))
The fib function provides the interface, and the main computation happens in the fib-raw function. The fib-raw function takes 3 extra arguments: f0 and f1 are the previous 2 Fibonacci values, and cur is the current index we're on. n remains the number we're trying to get to.
Once these arguments are understood, it is really painless to read the function:
- if n is 0 or 1, the answer is 1
- if the current number is n (the number we want), the answer is the last computed Fibonacci number – f1
- otherwise, re-run the same logic, but increment cur and add the 2 previous Fibonacci numbers to compute a new one
It also closely follows the original definition provided in the book. Looks good, right? It’s easy to read and efficient too.
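A quick REPL check, with expected values I worked out by hand against the classic sequence 1, 1, 2, 3, 5, 8…:

(fib 0)   ; => 1
(fib 1)   ; => 1
(fib 5)   ; => 8
(fib 10)  ; => 89

And unlike the naive version, each Fibonacci number along the way is computed exactly once, so the whole computation is linear in n.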
But wait, we can make this simpler, because Lisp supports optional arguments with default values:
(defun fib (n &optional (f0 1) (f1 1) (cur 1))
  (cond ((eql n 0) 1)
        ((eql n 1) 1)
        ((eql cur n) f1)
        (t (fib n f1 (+ f0 f1) (+ cur 1)))))
So now it’s just one function and the 3 extra arguments are optional.
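This form also makes it easy to see why the function is efficient: each call does constant work and makes exactly one recursive call, in tail position. Tracing (fib 5) by hand gives a simple linear chain:

(fib 5)          ; f0=1, f1=1, cur=1 (the defaults)
(fib 5 1 2 2)    ; f0=1, f1=2, cur=2
(fib 5 2 3 3)    ; f0=2, f1=3, cur=3
(fib 5 3 5 4)    ; f0=3, f1=5, cur=4
(fib 5 5 8 5)    ; cur = n, so the answer is f1 = 8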
But why stop there? Looking at it in this form, we can see that the initial default values of arguments f0 and f1 act like seeds to the function, so we might as well use them for the 0 and 1 cases:
(defun fib (n &optional (f0 1) (f1 1) (cur 1))
  (cond ((eql n 0) f0)
        ((eql n 1) f1)
        ((eql cur n) f1)
        (t (fib n f1 (+ f0 f1) (+ cur 1)))))
Thinking about this further, what are 0 and 1 really? They are an arbitrary basis upon which the rest of the computation is built. There's no need to hard-code them in the function – we are already supplying their results, so we might as well supply their indices too:
(defun fib (n &optional (f0 1) (f1 1) (n0 0) (n1 1) (cur n1))
  (cond ((eql n n0) f0)
        ((eql n n1) f1)
        ((eql cur n) f1)
        (t (fib n f1 (+ f0 f1) n0 n1 (+ cur 1)))))
Here I added the n0 and n1 optional arguments to specify the base indices. They default to 0 and 1, but now we can specify any base values when we call it. Note that cur now starts at n1, the index of the last supplied value, so the counting stays consistent whatever base we pick.
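For instance (a check of my own), the base can be shifted up by one with no change in the results, since the classic sequence has Fib(1) = 1 and Fib(2) = 2:

(fib 10)          ; => 89, using the defaults
(fib 10 1 2 1 2)  ; => 89, same sequence seeded from indices 1 and 2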
This last change allows us to make the computation even more efficient by specifying a larger base. For example, if we knew all our requests were going to be higher than 20, we could seed the function with the known Fibonacci numbers for 20 and 21, like this:
;; 10946 and 17711 are Fib(20) and Fib(21) in the classic sequence
(defun fib (n &optional (f0 10946) (f1 17711) (n0 20) (n1 21) (cur n1))
  (cond ((eql n n0) f0)
        ((eql n n1) f1)
        ((eql cur n) f1)
        (t (fib n f1 (+ f0 f1) n0 n1 (+ cur 1)))))
And now all our computations start at the base of 20 and 21, while all the logic stays exactly the same.
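A quick sanity check of the seeded version (values worked out from the classic sequence):

(fib 21)  ; => 17711, answered directly from the base case
(fib 25)  ; => 121393, reached in 4 recursive steps instead of 24

Of course, with this seeding any request below 20 is out of range, which is why it only makes sense when we know all requests will exceed the base.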
Further, I noticed on the Wikipedia page about Fibonacci numbers that the definition Paul Graham provides, which yields this sequence:
1, 1, 2, 3, 5, 8…
is the classic one. Wikipedia says the more modern sequence is defined as:
0, 1, 1, 2, 3, 5, 8…
Notice the latter starts with 0 instead of 1. But this change is trivial for us: similar to how we changed the base, instead of giving f0 a default result of 1, we'll give it 0:
(defun fib (n &optional (f0 0) (f1 1) (n0 0) (n1 1) (cur n1))
  (cond ((eql n n0) f0)
        ((eql n n1) f1)
        ((eql cur n) f1)
        (t (fib n f1 (+ f0 f1) n0 n1 (+ cur 1)))))
And now our computations start from 0, because we like to be modern, of course.
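One last REPL check, this time against Wikipedia's indexing:

(fib 0)   ; => 0
(fib 10)  ; => 55, one position behind the classic version's 89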
Feel free to attempt similar changes to the iterative version and see how it turns out. Can you make the changes this simply? Can you reason about your logic with the same clarity? Can you isolate your logic – the problem you are actually solving – from the overhead of the imperative style as cleanly? This is why functional-style recursion is more practical and makes more sense.
Also, keep in mind that I took an example the book claimed to be one of the very few cases where loops are superior. I believe I have demonstrated that they are not, even in this rare case. That should tell you something about almost every other case where you reach for a loop.
There's more to recursion than this. Before using it seriously, make sure your environment supports tail call optimization – a technique that keeps a new stack frame from being added with each recursive call – and learn how to write tail-recursive functions (fib-raw above is already tail-recursive: the recursive call is the last thing it does). Furthermore, if your environment or libraries provide higher-order functions, learn how to make use of them. Common functions such as map and fold (aka reduce) offer even easier and more expressive ways to define your logic.
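As a small taste of that style, here is a sketch of my own in standard Common Lisp, with no loop variables and no explicit recursion:

;; Sum of the squares of a list, built from two higher-order functions:
(reduce #'+ (mapcar (lambda (x) (* x x)) '(1 2 3 4)))  ; => 30

;; fold also subsumes things like finding a maximum:
(reduce #'max '(3 1 4 1 5 9 2 6))  ; => 9

The logic you care about (squaring, summing, comparing) is stated once, and the traversal machinery is entirely hidden.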
Finally, this is not, by any means, meant as a crusade against loops. I just wanted to demonstrate, for my own sanity's sake and for others who might stumble upon this post, that when programming in a high-level language, functional-style recursion almost always makes more sense than imperative loops.