> There was no latency, or waiting for the language at all. I could live in a running Lisp image and iteratively define my programs from the inside out, or in whatever way I wanted to, with immediate feedback after each change.
I am becoming more and more convinced that developer latency is the most important metric for being able to deliver value to your customer (broadly defined).
How fast does your code compile? How long do tests take to run? How long to deploy to an integration environment? To a test environment with real users? To production?
Lisp in some ways is still unmatched in this metric, for the kinds of reasons cited by the author.
I have experienced this at work. We were working on a somewhat complex embedded system and seeing some unusual performance corner-cases. Everyone had a different theory for what was causing the problem.
So I fired up SLIME and wrote a high-level simulator for the system, starting simple and adding in the complexities. Once we were seeing the same performance profile as the embedded system, I stopped, and we now had a malleable system. Then I spent the next 15 minutes implementing each of the recommended performance fixes individually, and we found one that worked in the simulation and then implemented that one in the real environment (it took almost 15 minutes just to update the program on the embedded target, and much longer to run a full test-case).
Making it so you can check your guesses faster means you need not be as good at guessing. I should put in one additional warning though: don't fall into the trap of "make random changes until it looks like it works" and call it done. At this point we had only demonstrated that we knew where the issue was; the final fix was significantly different from the "quick and dirty" fix suggested by the simulation. The simulation I wrote definitely saved weeks (if not months) of time because 90% of the team was convinced that the issue was in a different part of the system than it ended up being in. We found the issue only because "It will take less than a minute, so what's the harm in trying?"
> How fast does your code compile? How long do tests take to run? How long to deploy to an integration environment? To a test environment with real users? To production?
I'm working on ultra-fast iteration development methods for C++ with https://www.youtube.com/watch?v=fMQvsqTDm3k and yes, it makes productivity absolutely unmatched compared to everything I experienced before.
Preferences seem to matter here, some people just seem to do better with longer cycles of feedback on more items than shorter cycles on fewer items. And some people really may need that "my code is compiling, it'll take a while and there's nothing I can do" break to mock-sword fight or whatever, almost like a pomodoro break between periods of intense focus. But there is some data on what could be, and you can of course measure your own latency on things.
More narrowly defined, in the Java world JRebel is a product that can eliminate many kinds of forced redeploys after code changes -- or more simply, stopping and restarting your program, waiting for everything to come back up, and reconfiguring any necessary state to get back to what you were doing. I saved so much time thanks to it. Their own marketing materials suggest that waiting on redeploys alone can quickly add up to 20% of salaried time. (https://www.jrebel.com/sites/rebel/files/pdfs/rw-mythbusters...) Using it almost gets within javelin distance of what CL has always had, and gives you the flexibility to develop and debug in a more interactive style without having to go TDD or rewrite in ABCL.
I haven't been keeping up (so would love to be informed things are otherwise) but I thought it was a shame that the JS front-end world seemed to finally be creeping up on where ClojureScript was in 2014 or so for interactive low-latency development, but then suddenly took several steps backwards with TypeScript -- with respect to interactivity at least, I don't doubt people's claims of steps forward on other metrics.
> I am becoming more and more convinced that developer latency is the most important metric for being able to deliver value to your customer (broadly defined).
This is generally true. Tighter feedback loops lead to greater responsiveness. You just have to make sure you don't get overwhelmed by them, too (feedback faster than you can respond to it).
I do mostly Lisp these days, but your question brings me back to work I did with Smalltalk, one of the better integrated environments. It was so responsive and had effectively no compile cycle. At the end of an 8 hour day, I would be exhausted, as there was effectively no down time. (No time for sword play.)
I have been mostly a Common Lisp developer since 1982 (probably 30% of my professional work, with the rest of my paid time split between Java, C++, Python, Prolog, and Ruby).
About three years ago I was very enthusiastic about Julia becoming my primary language because so much of my work was in deep learning. I created a small repo of examples using Julia for non-numeric stuff like string processing, SPARQL query client, etc. Julia was pretty good all around.
What kept me from adopting Julia was realizing that Python had so many more libraries and frameworks. Right now, I split my time between Common Lisp and Python.
This is where I am stuck (albeit with drastically fewer years of experience). I want to make every piece of code in our pipelines differentiable. But ramping people up on Julia, not to mention creating and maintaining specialty libraries for in-house use, is too much for just me. We are a small shop and no one around feels like learning a new technology.
I can see needing Python instead of Julia for the libraries, but what about using Julia for new work that you'd normally do in Common Lisp, for the reasons cited in the source article?
I was looking for a new data notebook toy to ease the tedium of reporting some metrics.
I found the Clojure-based https://github.com/nextjournal/clerk, which I like the look of, and I remember Clojure being rather pleasing to work with interactively.
The thought that Lisp and Julia might have some commonality made me look for something similar (that isn't Jupyter) for Julia. https://github.com/fonsp/Pluto.jl seems to be such a thing.
Does anyone have any experience of either they could share?
So far I have little experience with Julia, but it looks to me very promising. Personally, I like a more settled environment and prefer languages defined by a standard, not by a single implementation.
"In fact, the first version of my most recent project saw an order of magnitude difference when ported from Common Lisp to Julia, without any effort or attention to optimization techniques."
That might have been the case, but one shouldn't expect it to be so in general; see e.g. https://benchmarksgame-team.pages.debian.net/benchmarksgame/... which shows much closer results (with multiple attempts in each language for any given benchmark to get better results).
Julia seems to struggle with the binary-trees benchmark. Then again, for many applications (and, over time, more and more of them) run-time performance is not the most relevant criterion (otherwise Python wouldn't be so successful).
That benchmark in particular has different rules for GC languages vs non-GC languages. Absurd, but true. Non-GC languages are allowed to pre-allocate a memory arena; GC languages are not. For the naive implementation, Julia beats Rust in this benchmark.
My personal experience tinkering with Lisp is that it can sometimes be very hard to actually achieve the legendary high performance of SBCL, without your code getting verbose and ugly. Yes, you could ostensibly wrap that ugliness up in macros, but then you're inventing your own DSL and a compiler in the macro engine, which I guess is what the true Lisp nerds live for, but I have too few brain cells and too little time/energy to shave such big yaks.
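As a small sketch of what that verbosity looks like in practice (the function and types here are illustrative, not from any project in this thread): getting SBCL to emit tight numeric code typically means hand-annotating argument types, array element types, and optimization qualities.

```lisp
;; Illustrative sketch: the declaration-heavy style SBCL often wants
;; for fast numeric code. Without the DECLARE forms this compiles to
;; generic arithmetic; with them, SBCL can use unboxed double-floats.
(defun sum-squares (xs)
  "Sum of squares over a vector of double-floats."
  (declare (type (simple-array double-float (*)) xs)
           (optimize (speed 3) (safety 0) (debug 0)))
  (let ((acc 0d0))
    (declare (type double-float acc))
    (loop for x of-type double-float across xs
          do (incf acc (* x x)))
    acc))
```

Calling it requires building the array with the matching element type, e.g. `(sum-squares (make-array 3 :element-type 'double-float :initial-contents '(1d0 2d0 3d0)))`, which is exactly the kind of ceremony the comment above is describing.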
On the flip side, I was thinking of learning CL as a Julia developer, and this post has somewhat discouraged me from doing so. What can learning CL do for me, apart from making me realise that S-expression syntax is superior?
For context, I very often find myself building small libraries from scratch to solve very specific scientific problems which is somewhat performance critical. Julia has worked very well for me for this, but I recognise that moving outside of your comfort zone is the best way to become a better programmer.
It's not that S-expression syntax is superficially better or worse; it's that S-expression syntax acts as a vehicle for linguistic abstraction. What I mean by that is that Lisp lets you seamlessly integrate new constructs into your programming environment that didn't previously exist. Lisp doesn't have a "parallel for-loop"? Well, it's easy to add one in a few lines of code.
That, combined with the unrelenting support for interactive and incremental development, makes Lisp at least a novel experience, and hopefully a transformative one.
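The "parallel for-loop in a few lines" claim can be made concrete. This is a minimal sketch, assuming SBCL's built-in sb-thread API (a threads-enabled SBCL build); the name PDOTIMES is hypothetical, not a standard or library operator:

```lisp
;; Illustrative sketch of linguistic abstraction: a DOTIMES variant
;; that runs each iteration in its own thread, then joins them all.
(defmacro pdotimes ((var count) &body body)
  (let ((threads (gensym "THREADS")))
    `(let ((,threads
             (loop for ,var below ,count
                   collect (let ((,var ,var)) ; capture per-iteration value
                             (sb-thread:make-thread
                              (lambda () ,@body))))))
       (mapc #'sb-thread:join-thread ,threads)
       nil)))
```

After evaluating that in a running image, `(pdotimes (i 4) (work-on-chunk i))` reads exactly like a built-in construct, which is the point being made about S-expressions as a vehicle for new syntax.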
Lisp has a readability issue: with all the brackets, the syntax doesn't make the author's intent immediately clear. I think it shares too many of the issues RPL and Forth had to be sustainable outside academia.
When it comes to implicitly parallel code, Julia hides a lot of the gruesome details behind a friendly interface similar to Octave, Matlab, and Python. However, what surprised me most of all was that its math ops are often still faster on a CPU than NumPy and many native C libs.
Julia still has a long way to go with its package version compatibility, but it is slowly improving as the core implementation stabilizes.
Like any language, Lisp has readability issues when you’re not familiar with it. You find it hard to read because it’s alien to you. To my eyes – because Lisp is what I’m used to – every non-Lisp language has readability issues. These days I do a significant amount of work in Python, Rust and Julia, and although there are things I love in all of these languages, I always miss s-expression syntax when using them.
medo-bear | 3 years ago:
Why not: from Common Lisp to Julia
https://gist.github.com/digikar99/24decb414ddfa15a220b27f674...
moelf | 3 years ago:
Have any of IronPython, Cython, or Jython worked well enough to be continued while remaining 100% compatible with the rich CPython ecosystem?
wiseowise | 3 years ago:
No it doesn't; that's like saying that any non-English language has readability issues.
pjmlp | 3 years ago:
And focusing on (), instead of ()[]{}:;.