The "right" way to generate Fibonacci numbers is not via the closed form, but by exponentiation of the matrix A = [0 1; 1 1] that defines Fibonacci numbers as an element of its endomorphism ring Z[x]/(x^2 - x - 1). This doesn't rely on explicitly computing the eigenvalues of A and scales well to more complex recursions. Sample Haskell:
fib :: Integer -> Integer
fib n = fst . loop (0, 1) base . abs $ n
  where
    pmul (a, b) (c, d) = ((a + b)*(c + d) - b*d, b*d + a*c)
    base = if n >= 0 then (1, 0) else (1, -1)
    loop acc sq m
      | m == 0         = acc
      | m `mod` 2 == 0 = loop acc (pmul sq sq) (m `div` 2)
      | otherwise      = loop (pmul sq acc) (pmul sq sq) (m `div` 2)
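A quick way to see what pmul computes: treat a pair (a, b) as the polynomial a·x + b in Z[x]/(x^2 - x - 1), so multiplication reduces via x^2 = x + 1, and x^n = F(n)·x + F(n-1). A minimal Python sketch of the same idea (my own illustration of the technique, not code from the thread):

```python
def pmul(p, q):
    # Multiply (a*x + b)(c*x + d) modulo x^2 - x - 1:
    #   = a*c*x^2 + (a*d + b*c)*x + b*d
    #   = (a*c + a*d + b*c)*x + (a*c + b*d)   since x^2 = x + 1
    (a, b), (c, d) = p, q
    return ((a + b) * (c + d) - b * d, b * d + a * c)

def fib(n):
    # Binary exponentiation of x in the quotient ring:
    # x^n = F(n)*x + F(n-1), so the first component of the result is F(n).
    acc, sq = (0, 1), (1, 0)  # acc = 1 (the ring identity), sq = x
    while n:
        if n % 2:
            acc = pmul(sq, acc)
        sq = pmul(sq, sq)
        n //= 2
    return acc[0]
```

The point is that no eigenvalues or irrational numbers ever appear: everything stays in integer arithmetic, and the loop runs in O(log n) ring multiplications.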
Indeed, this is covered in one of the references in my article, which didn't actually approach the issue of the "best" or "right" way to calculate the numbers.
def fib(n):
    a, b, x, y = 0, 1, 0, 1
    while n != 0:
        if n % 2 == 0:
            a, b = a * a + b * b, 2 * a * b + b * b
            n = n // 2
        else:
            x, y = a * x + b * y, b * x + a * y + b * y
            n -= 1
    return x
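The squaring step in the loop above is closely related to the well-known Fibonacci fast-doubling identities, F(2k) = F(k)·(2F(k+1) − F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2. A self-contained sketch of that variant (my own restatement, not the commenter's code):

```python
def fib(n):
    # Fast doubling: compute (F(k), F(k+1)) from (F(k//2), F(k//2 + 1)).
    def go(k):
        if k == 0:
            return (0, 1)              # (F(0), F(1))
        f, g = go(k // 2)              # f = F(m), g = F(m+1), m = k // 2
        c = f * (2 * g - f)            # F(2m)
        d = f * f + g * g              # F(2m+1)
        if k % 2 == 0:
            return (c, d)              # (F(2m), F(2m+1))
        return (d, c + d)              # (F(2m+1), F(2m+2))
    return go(n)[0]
```

Like the ring version, this takes O(log n) big-integer multiplications, but it carries only the two Fibonacci values it needs rather than a full ring element.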
Could you expand "defines Fibonacci numbers as an element of its endomorphism ring Z[x]/(x^2 - x - 1)" in a few more words? Are you saying that Fibonacci numbers are themselves an element of its endomorphism ring (despite not obviously being a morphism)? What does "Z[x]" mean (I have a feeling it is mathematical notation forced into plain text)?
This is a very important rebuttal to “The Mathematical Hacker” article because it shows that maths is useful but should not be taken too literally. Even Knuth uses the assembly language of a hypothetical machine in his Art of Computer Programming books, because any discussion of optimisation is incomplete without the context of which machine we are talking about.
The right way to understand the behaviour of computations on a real machine is inherently mathematical. This is why, for example, we have an entire discipline of numerical analysis, just for starters.
Programmers very commonly make elementary mistakes in computation precisely because they don't understand this and don't approach the problems mathematically. Ignoring overflow or pretending floats have infinite precision is antithetical to thinking about these problems mathematically.
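To make the floating-point point concrete, here is a small sketch (my own, assuming the IEEE-754 doubles Python uses for its floats) contrasting the rounded Binet closed form with exact integer arithmetic:

```python
import math

def fib_binet(n):
    # Closed form evaluated in floating point: round(phi^n / sqrt(5)).
    sqrt5 = math.sqrt(5.0)
    phi = (1.0 + sqrt5) / 2.0
    return round(phi ** n / sqrt5)

def fib_exact(n):
    # Plain integer iteration: exact at any size in Python.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# A double's 53-bit significand cannot represent F(n) exactly once F(n)
# passes 2^53, so the rounded closed form eventually drifts from the
# exact value; scanning upward finds the first mismatch.
first_bad = next(n for n in range(200) if fib_binet(n) != fib_exact(n))
```

The exact method never fails; the float one silently starts returning wrong integers long before anything overflows or raises, which is exactly the kind of error a mathematical view of the machine predicts.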
Conversely, any mathematician worth their salt will approach computation on a computer not as a black box, but as a concrete implementation. Mathematicians make lots of programming errors (some of them quite egregious and easily avoidable), but misunderstanding floating-point representations or overflow behaviour doesn't tend to be among them, in my experience.
By the way: Knuth used a hypothetical machine to avoid the quirks of a concrete one distracting from the discussion.
sbi | 12 years ago
leephillips | 12 years ago
ChristianMarks | 12 years ago
s-phi-nl | 12 years ago
Thanks.
nefreat | 12 years ago
In Scheme:
pmiller2 | 12 years ago
See my ancient post http://pythonista.wordpress.com/2008/07/03/pure-python-fibon...
Pretty graph here: http://stevekrenzel.com/articles/fibonacci
arocks | 12 years ago
ska | 12 years ago