
A Lisp adventure on the calm waters of the dead C (2021)

83 points | caned | 8 months ago | mihaiolteanu.me

32 comments


fifticon|8 months ago

I see mixed comments, so let me add some praise. I am one of the countless who match his intro-filter: repeatedly hearing 'enlightened' people lament that the vast masses don't "get" lisp and FP, and repeatedly attempting and failing to pick up the red thread myself.

background - I am a computer science major with 30+ years experience. I did do a mandatory 'implement your own lisp' class many eons ago. It just never really 'clicked' for me. I do, by accident, assimilation and laziness, employ FP-style designs in my software. And I guess FP techniques gradually rub off on me from e.g. javascript: lambdas, closures, and map-filter-reduce. In particular, lambdas are useful to me. But I am one of the guys who continue to read the "let me tell you what monads really are" posts, and every time I fall off the bicycle. So, well, I appreciated this "X for 5-year-olds" :-)

msla|8 months ago

Lisp is "functional" in a 1970s sense in that it has functions as first-class objects you can pass as parameters to other functions, but those functions are basically subroutines which can have side-effects and un-functional behavior. As you allude to, this is pretty much par for the course in procedural languages now, and OO As She Is Spoke is procedural with some extra stuff added. Even garbage collection is common enough now that languages which don't have it trumpet the fact and make it their whole personality. Lisp is heady and revolutionary if your baseline is FORTRAN and maybe C, in other words, unless you actually do begin to write your own macros, at which point the C people begin to look at you funny.

Haskell is functional in that it demands its functions be functions, not subroutines. A function has inputs mapped to outputs and no side-effects. Functions can be composed and composition always works. Haskell uses monads to represent the regrettable fact that having an impact on the outside world is, in a very real sense, a side-effect, so it marks all side-effecting functions with an indelible stain. Haskell requires a different mode of thought from Python, or even from C++, and it's definitely not another Lisp.

jmkr|8 months ago

I think Lisp is more on the liberal arts side of programming languages.

That the "enlightenment" of Lisp is that you can use functions everywhere. Write macros that look like functions and modify behavior, and build your code as a language.

Things like monads belong more to the evolution of functional languages, and I also fall off the bike there. Lisp is as difficult as you want it to be, and I find Scheme and Lisp to be easier, more sensible high-level languages than JavaScript or Python.

The foreword and preface to SICP are good reading.

https://mitp-content-server.mit.edu/books/content/sectbyfn/b...

The Dan Friedman books are pretty good in general: "The Little Schemer" and the sequel "The Seasoned Schemer," which are both more "recursion" books. He also has another book, "Scheme and the Art of Programming," which I think is a great comp sci book that's not too difficult and doesn't seem too well known.

How to Design Programs is supposed to be a pretty good comp sci intro:

https://htdp.org/2024-11-6/Book/index.html

baq|8 months ago

my epiphany with lisp was that it is not a functional language in the modern sense, i.e. mutability is fine, loops are fine, etc. it's primarily a list processor, not lambda calculus.

dawnofdusk|8 months ago

The author of this blog is clearly eloquent and, as per their interspersed quotations of David Hume and others, it is refreshing to see someone so well-read in the software/tech blogosphere.

I love Lisp. The last few paragraphs are a pretty good description. It's nice to have a very flexible set of tools, instead of being forced to conform to object-oriented design or whatever paradigm. IMO the only legitimate reason for sticking steadfastly to a design paradigm is performance, but of course that can only really justify array programming/imperative programming. But at the point where you want some flexible abstractions, it's nice to have the power to do introspection, delayed evaluation, and so on. Disclaimer: my background is physics/math, so function abstractions are much more intuitive to me than objects, or whatever other structures are taught to CS students.

kazinator|8 months ago

The author may be eloquent, but unfortunately calls imperative operators, like while, "functions".

lproven|8 months ago

Previously:

https://news.ycombinator.com/item?id=28851992

https://news.ycombinator.com/item?id=44359454

No comments on any of them.

It sounded of interest to me, but I read it and closed the tab within a page or so as it wandered off into tech arcana. Shame. There may be an interesting idea in here but it's phrased in terms I think few will be able to follow and understand.

I did not finish it but I saw no mention of the lambda calculus or of currying, both of which -- from my very meagre understanding -- seem directly relevant to what I understood to be the core point, which seems to be about anonymous functions.

Jach|8 months ago

I don't think you're missing much. Yeah, the main point seems to be that if your language has closures, you suddenly can express a lot of things that were out of reach before. Not a new insight. But there's another point I think that is hinted at on the topic of control abstractions. Or at least I'm reminded of the topic. It's better and more succinctly and explicitly talked about in an early chapter of the free book Patterns of Software: https://dreamsongs.com/Files/PatternsOfSoftware.pdf

The extra point might be that more languages should facilitate defining your own control abstractions just as they support defining your own data abstractions. Functions are one way of making data abstractions, but languages often provide multiple ways. Closures are one way of doing a type of control abstraction (involving such things as delayed or multiple evaluation), but there are other ways too. For some reason we see value and a need for defining our own data abstractions, but not so much for control abstractions, even though (according to the book) once they were often co-designed, like Fortran's arrays and DO loop. And for some reason even in the few languages that do support making your own control abstractions, like Lisp, you'll still find users who disapprove of doing so, claiming all you need are the standard existing methods like looping, map/reduce style functions, and some non-local exits.

dreamcompiler|8 months ago

Currying is done automatically in Haskell but not in Lisp. If you wanted currying in Lisp you could write it, but Lisp programmers don't depend on or talk about currying as much as Haskell programmers do.

tmtvl|8 months ago

The core point, to me, seemed to be about limiting factors in language extension. To allow something like:

  my_if (points <= 100, printf ("%d", points), error ("Invalid point total"));
Where the various parameters are lazily evaluated. Or like:

  frobnicate (frazzle: foo, frozzle: bar, frizzle: baz);
Where frazzle, frozzle, and frizzle are position-independent keyword variables.

Allowing those in C would require a modicum of effort, while other languages make these kinds of syntax extension fairly easy.

mrbluecoat|8 months ago

At least "the dead C" was a nice pun :D

int_19h|8 months ago

Coincidentally R is one language in which `if` and `while` can be written as functions, because all function arguments are lazily evaluated, and one can get access to the underlying lambda for repeated re-evaluation. In fact, `if` and `while` are functions in R, and you can call them as such if you properly quote the keyword so that it's treated as an identifier. And then the familiar C-style syntactic forms are just syntactic sugar for function calls.

R takes it up a notch though by making all syntactic constructs boil down to a function call. Function definitions are themselves calls, for example, and so are assignments and even curly braces.

deterministic|8 months ago

There seems to be a lot of "Lisp is better than C/C++/..." articles around. I wonder why?

If the purpose is to try and convince non-Lispers to use Lisp, a more convincing argument (for me at least) would be to demonstrate modern commercial software written faster and more bug free in Lisp.

For example: "Here is a modern biz web application written in Lisp" showing step by step how Lisp makes the development process faster and less buggy than implementing the same application using (say) Typescript/C++.

Notes: I use custom code generators to generate more than 90% of the Typescript/C++ code needed to implement biz applications, leaving only the core biz logic. So macros for code generation don't really give me anything I don't have already. And using macros to define my own DSLs within the language would just make the code unreadable for other developers. So it is not a feature I actually want.

timewizard|8 months ago

    #include <stdbool.h>

    typedef int fn_t(void);

    int iff(bool cond, fn_t a, fn_t b) {
        if (cond)
            return a();
        else
            return b();
    }
Now just write the implementation in terms of a() and b(). I don't get it. C doesn't have convenient syntax for this, but it is a compiled language, not an evaluated one. This argument didn't make sense to me.

1718627440|8 months ago

I have the same issue with this article. I think it actually complains about the lack of inline anonymous function declarations.

The fact that you can't create functions at runtime is only due to needing a compiler. If you link your program against a compiler, you can absolutely turn strings into code at runtime and then pass them around to other functions.