This delays the binding of the handler functions (zero and succ) until each step instead of deciding once way up at the top.
Addition is easy:
\add = (\x\y x y \p succ (add p y))
As is multiplication:
\mul = (\x\y x 0 \p add y (mul p y))
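To make the shape of these definitions concrete, here is a rough Python model of the same idea (not Fexl — the names zero/succ/add/mul/to_int/from_int are mine, and Python recursion stands in for Fexl's). A numeral is a function of two handlers: a zero-case value and a succ-case function that receives the predecessor.

```python
def zero(z, s):
    # the zero numeral ignores the succ handler
    return z

def succ(n):
    # succ n hands the predecessor n to the succ handler
    return lambda z, s: s(n)

def add(x, y):
    # mirrors: \add = (\x\y x y \p succ (add p y))
    return x(y, lambda p: succ(add(p, y)))

def mul(x, y):
    # mirrors: \mul = (\x\y x 0 \p add y (mul p y))
    return x(zero, lambda p: add(y, mul(p, y)))

def to_int(n):
    # count the succ layers, for inspection
    return n(0, lambda p: 1 + to_int(p))

def from_int(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n
```

For example, to_int(add(from_int(3), from_int(4))) gives 7, dispatching on the outer constructor of x at each step rather than committing to one handler up front.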
Now unary notation is fine for some things, but when I'm using arbitrarily large numbers in Fexl I use a binary notation (lists of bits), with representations as follows:
[] = 0
[F|N] = 2*N (where N != 0)
[T|N] = 2*N+1
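In other words the head of the list is the least significant bit. A quick Python model of just the representation (the names bits_to_int/int_to_bits are mine):

```python
def bits_to_int(bits):
    # [] = 0, [F|N] = 2*N, [T|N] = 2*N+1, head is the low bit
    if not bits:
        return 0
    return 2 * bits_to_int(bits[1:]) + (1 if bits[0] else 0)

def int_to_bits(k):
    # peel off low bits; zero naturally becomes the empty list,
    # so representations stay canonical (no trailing F)
    bits = []
    while k:
        bits.append(k % 2 == 1)
        k //= 2
    return bits
```

So 6 comes out as [F,T,T] and the empty list is the unique zero.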
In that notation addition is a little harder but not too bad; the only tricky case is when both low bits are set, which produces a 0 bit and carries 1 into the recursive sum via a binary successor:

\succ = (\x x [T] \b\n b [F|succ n] [T|n])

\add = (\x\y x y \xb\xn y x \yb\yn
\z=(add xn yn)
xb
(yb [F|succ z] [T|z])
(yb [T|z] [F|z]))
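The same algorithm sketched in Python on those LSB-first lists of booleans (my own transcription, not Fexl; succ_bits is the binary successor used for the carry):

```python
def succ_bits(x):
    # binary successor: [] -> [T], [T|n] -> [F|succ n], [F|n] -> [T|n]
    if not x:
        return [True]
    if x[0]:
        return [False] + succ_bits(x[1:])
    return [True] + x[1:]

def add_bits(x, y):
    if not x:                      # 0 + y = y
        return y
    if not y:                      # x + 0 = x
        return x
    z = add_bits(x[1:], y[1:])     # sum of the shifted-out parts
    if x[0] and y[0]:              # 1 + 1 = 0, carry 1 into z
        return [False] + succ_bits(z)
    if x[0] or y[0]:               # 1 + 0 = 1, no carry
        return [True] + z
    return [False] + z             # 0 + 0 = 0
```

For instance 3 + 5, i.e. add_bits([T,T], [T,F,T]), comes out as [F,F,F,T], which is 8.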
Multiplication is:
\twice = (\x x [] \_\_ [F|x])
\mul = (\x\y x [] \xb\xn y [] \yb\yn
\z=(twice; mul xn yn)
xb
(yb [T|add xn; add yn z] [F|add yn z])
(yb [F|add xn z] [F|z]))
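Again a Python rendering of the same four-case multiply (my own sketch, reusing the binary add and successor from before; twice doubles while keeping zero canonical). The algebra behind the cases is (2*xn+xb)*(2*yn+yb) = 4*xn*yn + 2*xb*yn + 2*yb*xn + xb*yb.

```python
def succ_bits(x):
    # binary successor on LSB-first bit lists
    if not x:
        return [True]
    if x[0]:
        return [False] + succ_bits(x[1:])
    return [True] + x[1:]

def add_bits(x, y):
    if not x:
        return y
    if not y:
        return x
    z = add_bits(x[1:], y[1:])
    if x[0] and y[0]:
        return [False] + succ_bits(z)
    if x[0] or y[0]:
        return [True] + z
    return [False] + z

def twice(x):
    # double a number; twice [] = [] keeps zero canonical
    return [False] + x if x else []

def mul_bits(x, y):
    if not x or not y:                      # 0 * y = x * 0 = 0
        return []
    z = twice(mul_bits(x[1:], y[1:]))       # z = 2 * xn * yn
    if x[0] and y[0]:                       # odd * odd
        return [True] + add_bits(x[1:], add_bits(y[1:], z))
    if x[0]:                                # odd * even
        return [False] + add_bits(y[1:], z)
    if y[0]:                                # even * odd
        return [False] + add_bits(x[1:], z)
    return [False] + z                      # even * even
```

So 3 * 5, i.e. mul_bits([T,T], [T,F,T]), gives [T,T,T,T], which is 15.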
[The ';' you see there is the Fexl "right-pivot" operator, a syntactic device for avoiding right-nesting. For example, (add 1; add 2; add 3; 0) is a shorthand for (add 1 (add 2 (add 3 0))). Also the notation [x|y] is a shorthand for (cons x y), where cons = (\head\tail \null\cons cons head tail).]
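That cons encoding also translates directly to Python, which makes the case analysis in the bit-list code above easier to see (nil/cons/to_py_list are my names): a list is a function of two handlers, one for the empty case and one receiving the head and tail.

```python
def nil(null, c):
    # the empty list selects the null handler
    return null

def cons(head, tail):
    # [x|y] = (cons x y): hand head and tail to the cons handler
    return lambda null, c: c(head, tail)

def to_py_list(xs):
    # flatten an encoded list into an ordinary Python list
    return xs([], lambda h, t: [h] + to_py_list(t))
```

Then cons(1, cons(2, nil)) plays the role of [1,2|nil], i.e. [1, 2].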
You can also harness the full power of the computable numbers, which is how you do the so-called "real" numbers on a physical computing device. A computable number is a pair of lists of bits (T or F). One list is the integer part and the other is the fractional part. The fractional part can be infinite, allowing exact representation of numbers like pi and e. (Come to think of it you could have an infinite integer part as well, which might actually be useful somehow.)
If you're careful to avoid non-termination you can do useful work with lazily evaluated computable numbers. Something like (add pi e) will work just fine and give you an exact result to any number of places you desire. On the other hand (equal pi pi) will loop forever, searching futilely for two bits which disagree. But when you're looking for "close enough" you can just apply (clip 20) to the numbers and then compare them.
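A loose sketch of that "clip then compare" idea in Python (not Fexl — I'm using a generator of booleans to stand in for the lazy fractional part, and third/clip are made-up names): 1/3 is 0.010101... in binary, an infinite but perfectly regular stream, and clipping to a finite prefix makes comparison terminate.

```python
from itertools import islice

def third():
    # fractional bits of 1/3 in binary: 0.010101...
    while True:
        yield False
        yield True

def clip(n, bits):
    # keep only the first n fractional bits, so comparisons terminate
    return list(islice(bits, n))
```

Comparing clip(20, a) against clip(20, b) is the "close enough" test; comparing the raw infinite streams of two equal numbers would never return.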
fexl | 16 years ago
fadmmatt | 16 years ago
http://matt.might.net/articles/church-encodings-demo-in-sche...
When building a compiler, I have my students use Church encodings to eliminate many constructs at first. As we learn how to handle each construct, we steadily drop the Church encodings and performance improves.