That's a nice one, although it makes several simplifications (e.g., from a quick glance I could not find anything dealing with scopes; the prefixing of variables with "_V_" probably hints at their absence).
Nevertheless, impressive work from one of the most interesting projects around.
This makes me vocalize something I've wondered about...
What's the smallest subset of Lisp you need to implement in order to bootstrap the language? Given this small subset, one could implement the rest of, say, Scheme, Common Lisp, or Arc on top of it.
I vaguely recall Sussman or Ebelson implementing car and cadr in terms of lambdas, for example.
It seems this should be quite modular, given so many implementations of Lisp [in JavaScript, PHP, etc., now Ruby], so that new Lisps might easily be brought up on top of this kernel.
That only uses 'setq, 'lambda, 'if, 'eq, and quoted symbols. That pretty much gives you the untyped lambda calculus, plus some side effects. Throw in the appropriate macros and you've got the core of Scheme and Arc, without any I/O or efficient data representations.
You can take this pretty far. Essentially, you just need 'lambda and application. You could use church numerals for numbers, monads for side effects, the y-combinator for recursion, etc.
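As an illustrative sketch of that claim (written in Python rather than Lisp, since closures are all that's needed; all names here are my own, not a canonical encoding), Church numerals and a call-by-value fixed-point combinator look like:

```python
# Church numerals: the number n is a function that applies f to x n times.
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Decode a Church numeral by counting how many times it applies a function.
    return n(lambda k: k + 1)(0)

# Z combinator: the strict-evaluation (call-by-value) variant of the
# Y combinator, giving recursion without explicit self-reference.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial defined without naming itself.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
```

The same trick extends to booleans, pairs, and lists, which is why lambda plus application really is enough in principle.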
Pair-structured memory, a (very simple) recursive-descent parser, eval, apply, read, print, a few mathematical primitives, not too much else. Once you have the bare bones, _The Art of the Interpreter_ provides a good skeleton. Don't worry about a garbage collector yet - either use a language with one (Scheme, Python, Lua, Ruby, OCaml, etc.) or just let it leak memory for now. (Garbage collectors aren't deep magic either, but one thing at a time, you know?)
Implementing a toy Lisp is absolutely worthwhile. It shouldn't take too long*, and it will teach you several deep things. Don't worry about making it efficient until you have it together enough to be useful. (If you want a mini language that's mature, just use Lua.) This is about discovery. Many interpreters for mini-languages don't need to be efficient, anyway. Who writes 20k-line files in a DSL?
* Though if you've never done an interpreter at all, a half-assed Forth is even simpler. Either way, implementing half-assed Lisps, Prologs, Js, etc. is a great way to feel out a language. Worrying about getting it efficient upfront can be a red herring.
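The bare-bones ingredient list above can be sketched quite compactly. This is one possible take in Python (a hypothetical minimal dialect, not any particular Lisp), leaning on the host language's garbage collector exactly as suggested:

```python
# Minimal reader + eval/apply: nested Python lists stand in for
# pair-structured memory, and the host handles garbage collection.

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    # A (very simple) recursive-descent parser producing nested lists.
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    try:
        return int(tok)
    except ValueError:
        return tok  # anything non-numeric is a symbol

def evaluate(x, env):
    if isinstance(x, str):            # symbol: look it up
        return env[x]
    if not isinstance(x, list):       # literal: self-evaluating
        return x
    op = x[0]
    if op == "quote":
        return x[1]
    if op == "if":
        return evaluate(x[2] if evaluate(x[1], env) else x[3], env)
    if op == "lambda":
        params, body = x[1], x[2]
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(op, env)            # application: eval operator, then operands
    return fn(*[evaluate(arg, env) for arg in x[1:]])

GLOBAL = {"+": lambda a, b: a + b, "*": lambda a, b: a * b,
          "eq": lambda a, b: a == b, "cons": lambda a, d: [a] + d,
          "car": lambda p: p[0], "cdr": lambda p: p[1:]}

def run(src):
    return evaluate(read(tokenize(src)), GLOBAL)
```

For example, `run("((lambda (x) (+ x 1)) 41)")` evaluates to `42`. A real interpreter would add error handling, definitions, and proper tail calls, but this is the whole eval/apply skeleton.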
Well, the lambda calculus is Turing complete, so you could just stick with a call-by-value version of that. However, you'd want to add macros if you wanted to extend the syntax of the language. Since you mentioned car and cdr, they can be defined purely in terms of lambdas.
Greenspun's Tenth Rule of Programming: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
http://philip.greenspun.com/research/
lisper | 16 years ago
http://www.flownet.com/ron/lisp/l.py
MaysonL | 16 years ago
examples given which run about 70% of the speed of equivalent C code.
fadmmatt | 16 years ago
exp ::= var | (lambda (var) exp) | (exp exp)
Check out
http://matt.might.net/articles/church-encodings-demo-in-sche...
to see how to build numbers, booleans, lists, conditionals, and recursion out of pure lambdas.
Even cooler is that these tricks work in languages like JavaScript too:
http://matt.might.net/articles/implementation-of-recursive-f...
mdemare | 16 years ago
Eighth paragraph: http://www.paulgraham.com/ilc03.html
procrastitron | 16 years ago
NIL = (lambda (x) (lambda (y) y))
CONS = (lambda (ar) (lambda (dr) (lambda (op) (op ar dr))))
CAR = (lambda (expr) (expr (lambda (ar dr) ar)))
CDR = (lambda (expr) (expr (lambda (ar dr) dr)))
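These definitions transcribe directly into any language with closures. Here is one possible rendering in Python, with the two-argument inner lambdas curried so that everything stays within one-argument lambda calculus:

```python
# Church-encoded pairs: CONS packages two values into a closure that
# hands them to a selector; CAR and CDR pass in the selector they need.
NIL  = lambda x: lambda y: y
CONS = lambda ar: lambda dr: lambda op: op(ar)(dr)
CAR  = lambda expr: expr(lambda ar: lambda dr: ar)
CDR  = lambda expr: expr(lambda ar: lambda dr: dr)

pair = CONS(1)(2)   # CAR(pair) -> 1, CDR(pair) -> 2
```

Lists then fall out for free: `CONS(1)(CONS(2)(NIL))` is `(1 2)`, and `CAR(CDR(...))` walks it, with no pair data structure anywhere in sight.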
sb | 16 years ago
http://library.readscheme.org/page1.html
What you are looking for is answered in "The Art of the Interpreter or, the Modularity Complex (Parts Zero, One, and Two)", Steele and Sussman, MIT AI Lab Memo 453, May 1978.
http://repository.readscheme.org/ftp/papers/ai-lab-pubs/AIM-...
More Lisp details can be found in the book "Lisp in Small Pieces" by Christian Queinnec (1996), and an excellent description of the eval/apply functions can be found in "The Architecture of Symbolic Computers" by Peter M. Kogge, 1990.
Btw, to satisfy my more or less latent pedantry: the second SICP author is called Abelson, not Ebelson.
jacquesm | 16 years ago
http://www.faqs.org/faqs/lisp-faq/part1/section-6.html
statictype | 16 years ago
Not as elegant though.
stcredzero | 16 years ago
This is generally an indicator that language_power(X) > language_power(Y)
RiderOfGiraffes | 16 years ago
Taking a specific example, "Lisp in 100 lines of Ruby" means that once you have Ruby, you have Lisp. That means that anything you can do in Lisp can be done in Ruby, and hence Ruby is at least as powerful. That means:
X in (some small number) of lines of Y
implies
language_power(X) <= language_power(Y)
Exactly the opposite of what you said.