chameco | 9 years ago
Additionally, it makes the transition to the lambda calculus more difficult to grok, which somewhat hinders understanding. It's pretty trivial to write some simple rewrite rules from (most of) Scheme to the lambda calculus: it's a two-hour project at most. Doing this for CL is much less intuitive/elegant, which may make it more difficult for those with that sort of theoretical background.
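As a minimal sketch of what one such rewrite rule looks like (the function name `expand_let` and the nested-list s-expression encoding are invented for illustration, not from the comment):

```python
# Illustrative sketch: one of the "simple rewrite rules" from Scheme to
# the lambda calculus, applied to s-expressions encoded as nested lists.
#   (let ((x e) ...) body)  =>  ((lambda (x ...) body) e ...)

def expand_let(form):
    """Recursively rewrite let-forms into immediate lambda applications."""
    if not isinstance(form, list):
        return form
    if form and form[0] == "let":
        _, bindings, body = form
        names = [name for name, _ in bindings]
        values = [expand_let(value) for _, value in bindings]
        return [["lambda", names, expand_let(body)]] + values
    return [expand_let(sub) for sub in form]

# (let ((a 42)) a)  =>  ((lambda (a) a) 42)
print(expand_let(["let", [["a", 42]], "a"]))
```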
netsettler | 9 years ago
One way I sometimes conceive of it is that in any given language there are a certain number of small expressions and a certain number of large ones. Differences in semantics don't make things non-computable (which is why Turing equivalence is boring), but they change which expressions are easily reachable. There are certain things Scheme wants to be able to say in not many characters, and CL wants different ones. Neither is a flawed design; they satisfy different needs. It's possible to dive into either and be fine. As others have pointed out here, it's not as big a deal in practice as it seems in theory. What matters in practice is to have an intelligible and usable design, which both languages have. But to assume that the optimal way to say something in one language should stay constant even when you change the syntax and semantics of the language is to misunderstand why you would want to change the syntax and semantics in the first place.
kazinator | 9 years ago
You cannot say with a straight face that "all elements of a form are evaluated equally, and then the rest of the values are applied as args to the first value," because the counterexample (let ((a 42)) a) doesn't work that way.
A Lisp-1 has to treat the leftmost position specially to determine whether let is a macro to be expanded or a special operator. That treatment does not happen when the symbol appears in an argument position, as in (list 3 let 4).
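The asymmetry can be made concrete with a toy evaluator (purely illustrative; the `SPECIAL` table and list-based encoding are invented, not any real Lisp): the head of a form is checked against the special-operator table before anything is evaluated, while argument positions are only ever looked up as variables.

```python
# Toy Lisp-1 evaluator showing the head-position special case.

SPECIAL = {"let"}  # recognized in head position only

def evaluate(form, env):
    if isinstance(form, str):        # symbol: variable reference
        return env[form]
    if not isinstance(form, list):   # self-evaluating literal
        return form
    head = form[0]
    if isinstance(head, str) and head in SPECIAL:
        # (let ((name expr) ...) body) -- the head got special treatment
        _, bindings, body = form
        inner = dict(env)
        for name, expr in bindings:
            inner[name] = evaluate(expr, env)
        return evaluate(body, inner)
    # Ordinary call: only here are all positions evaluated equally.
    fn, *args = [evaluate(sub, env) for sub in form]
    return fn(*args)

env = {"list": lambda *xs: list(xs)}
print(evaluate(["let", [["a", 42]], "a"], env))   # 42
# evaluate(["list", 3, "let", 4], env) fails: "let" is not a variable here.
```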
In the TXR Lisp dialect (which provides both Lisp-1- and Lisp-2-style evaluation), I fix this. In the Lisp-1-style forms, macros are not allowed, so [let ((a 3)) a] is a priori nonsense. The let symbol's position is evaluated exactly like the other positions, without being considered a macro (other than a symbol macro, which any of the other positions may also be).
The combination of Lisp-2 and Lisp-1 in one dialect lets me have a cleaner, purer Lisp-1 in which that half-truth about all positions of a Lisp-1 form being equally evaluated is literally true, always.
Lisp-2 for macros and special ops, Lisp-1 for HOF pushing: beautiful. (list list) works, no funcall anywhere, and even if let is a variable that holds a function, [let 42] calls the damn thing.
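A rough model of the hybrid (this is an invented encoding, not how TXR is implemented: Python lists stand in for round Lisp-2 forms, tuples for square-bracket Lisp-1 forms): round forms resolve the head in a function namespace, while bracket forms evaluate every position, head included, in the variable namespace and never consult any macro table.

```python
# Toy model of mixed Lisp-2 / Lisp-1 evaluation.

FUNCTIONS = {"list": lambda *xs: list(xs)}    # Lisp-2 function namespace

def evaluate(form, variables):
    if isinstance(form, str):                 # symbol: variable lookup
        return variables[form]
    if isinstance(form, tuple):               # [f a ...]: pure Lisp-1
        fn, *args = [evaluate(sub, variables) for sub in form]
        return fn(*args)                      # head evaluated like the rest
    if isinstance(form, list):                # (f a ...): Lisp-2
        head, *rest = form
        fn = FUNCTIONS[head]                  # head resolved as a function
        return fn(*[evaluate(sub, variables) for sub in rest])
    return form                               # self-evaluating literal

env = {"let": lambda x: x + 1, "list": FUNCTIONS["list"]}
# Even when "let" is a variable holding a function, [let 42] calls it,
# and no let operator gets shadowed, because [..] never sees operators.
print(evaluate(("let", 42), env))             # 43
# (list list): an argument-position "list" resolves as a variable.
assert evaluate(["list", "list"], env) == [env["list"]]
```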
The let operator is not shadowed. Basically, as far as I'm concerned, this whole Lisp-1-versus-Lisp-2 squabbling is an obsolete debate and a solved problem (by me).
netsettler | 9 years ago
But also, there was a very interesting proposal to ISO, which did not survive, in which variable numbers of arguments were handled by an alternate namespace with very specific operations the compiler could understand. You could promote them to the regular namespace if you needed to, but the compiler could do nice things with the stack if you kept them in their more limited arena, where it could figure out what you meant to do with them. That proposal got voted down, and I was one of those who didn't like it, but I came to think it was less of a bad idea than I had thought once I saw some of the confusion that comes up with managing rest lists on the stack in CL, which are very hard to manipulate while knowing for sure when they need copying and when they don't. First-class status implies there's probably a halting problem in the most general case of a compiler trying to analyze what you're doing with a thing. Often compilers recognize enough idioms that this doesn't come up in practice, but second-class spaces can, in certain ways, lead you to do things that work out better.
Each paradigm has its value.