I will go against the trend here and give a big thanks to the whole Julia team for all their wonderful work.
I've been a heavy Julia user for 4+ years and adore this ecosystem. I use Julia for parallel computing, modeling and solving large-scale optimization problems, stochastic simulations, etc. During the last year or so, creating plots and dashboards has become much easier too.
Julia makes it surprisingly easy to go from "idea" to "large-scale simulation". I've used it in production and just for prototyping/research. I can engage with Julia as deeply as I would with C code or as "lightly" as I engage with Matlab/R.
My favorite change (even though it's not listed in the changelog) is that just-in-time compiled code now has frame pointers[1], making Julia code much more debuggable. Profilers, debuggers, etc. can all now work out of the box.
Extra excited that the project I happen to work on (the Parca open source project[2]) influenced this change [3][4]. Shout out to Valentin Churavy for driving this on the Julia front!
>Matlab users should switch to Julia. [...] What prevents matlab users from switching? The syntax is similar.
Choosing a programming language based on just comparing the language syntax only works for academic settings or toy projects for self-curiosity and learning. Once you consider adopting a language for complicated real-world industry usage, you have to look beyond the syntax and compare ecosystem to ecosystem.
E.g. Look over the following MATLAB "toolboxes" and add-ons developed over decades: https://www.mathworks.com/products.html
Julia doesn't have a strong equivalent ecosystem for many of those. In that MATLAB product list is Simulink. Tesla uses that tool to optimize their cars: https://www.mathworks.com/company/newsletters/articles/using...
You can take a look at some of the 1-minute overview videos to get a sense of MATLAB toolboxes that companies pay extra money for: https://www.youtube.com/results?search_query=discover+matlab...
It has add-ons such as a medical imaging toolkit, wireless communications (antenna signal modeling), etc. And MATLAB continues releasing new enhancements that the Julia ecosystem doesn't keep up with.
If one doesn't need any of the productivity tools that MATLAB provides, Julia becomes a more realistic choice.
Or to put it another way, companies didn't really "choose the MATLAB programming language". What they really did was choose the MATLAB visual IDE and toolkits -- which incidentally had the MATLAB programming language.
I don't know if you genuinely want feedback... but I'll share my very short experience. I tried Julia one time a few years back. I'll be honest, I didn't put a lot of effort into it (but neither will most potential Matlab converts, because people are busy and have stuff to do).
It's got a frustrating, "not fun" on-boarding, i.e. the number of minutes from downloading Julia to getting cool, satisfying results is too high.
1. It's not a calculator on steroids like Matlab. It doesn't have one main open-source IDE like Octave/RStudio that you can drop into and play around in (with plots, docs, REPL, and workspace all in one window)
2. The default language is more like a "proper programming language". To even make a basic plot you need to import one of a dozen plotting libraries (which requires learning how libraries and importing work - boring...), and how is someone just getting started supposed to decide which one? I don't need that analysis paralysis when I'm just getting started
3. Documentation... Well, it's very hard to compete with Matlab here, but the website is not as confidence-inspiring. The landing page is a wall of text: https://docs.julialang.org/en/v1/ To be honest, from the subsequent manual listing it's not even clear it's a math-focused programming language. It talks about constructors, data types, FFI, stack traces, networking, etc.
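For comparison, point 2 in practice: even a first plot starts with a package decision and an install step (Plots.jl below is just one of the candidates):

```julia
# One-time setup:  julia> using Pkg; Pkg.add("Plots")
using Plots    # could equally be Makie, Gadfly, UnicodePlots, ...

x = 0:0.1:2pi
plot(x, sin.(x); label="sin(x)")
```

whereas in Matlab `plot(x, sin(x))` works with nothing to install.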
> What prevents matlab users from switching? The syntax is similar.
I’ll offer my perspective. I’m switching little-by-little. Why don’t I just rip off the bandaid and switch whole-hog immediately? The literally twenty years of tools I’ve developed in MATLAB.
At work, I’m paid to perform analyses, develop algorithms, and document that work to my colleagues and our sponsors. I’m not paid to learn Julia. If I’m working on something completely novel where none of the MATLAB building blocks I’ve written over the years are useful, or the porting time is a small fraction (for some arbitrary and subjective definition) of the overall task time, then I’ll do it in Julia.
The toolboxes aren’t a huge sticking point for me: Mathworks has only somewhat recently developed toolboxes geared toward my particular domain, so none of my code relies on them. My ability to share with colleagues is a bit of a sticking point. We’re predominantly a MATLAB shop (and this seems true not just at my company, but in our particular niche in industry). There has been some movement toward Python. But if it’s anything like the transition from Fortran to MATLAB — which was still on-going when I started in the early 2000s — then a full switch to either Python or Julia is still a ways off.
Syntax is one thing, but as others have mentioned, it's more than that: libraries, tooling, IDE, and perhaps also knowing the idiosyncrasies and pitfalls, and how to recover from them.
I'm very comfortable in Matlab and often know immediately what's wrong when I hit those oddities. In Python it usually means I spend considerable time googling and tinkering before I even understand what I did wrong because I have far less experience... Same when I tinker with Julia.
And for some people it being a Real Programming Language may be a disadvantage actually... Typically means you need a better understanding than just type-and-run.
I also did this and only the students that were well ahead of the rest gave it a try. But I think this is natural, many students are already pushed to the limit to manage the current set of studies. Learning a whole new language on top of that introduces a lot of extra risk and has an opportunity cost that might not make it feasible (unless you are already doing very well).
I don't think it is so much that students are lazy but more that the current way that the study plan is made in universities doesn't allow for much risk taking by the students. So they will just go the "cookie cutter" way.
Intentional or not, I enjoy how humorously provocative this comment is.
It's not possible to understand why someone would prefer Matlab through a software engineer's lens. Our jobs are different and a real programming language isn't anywhere on our list of needs or wants for matrix laboratory work. We actually prefer semantics like copy-on-write and features like dynamically adding fields to structs at runtime. It fits our workflow better, and that's ultimately what matters most.
I'm sure one day I'll add Julia to the list of real programming languages that I've used to write a library, but I'll still wrap it and call it from Matlab just like everything else.
What prevents it is the libraries. Just like Python.
Julia as a compiled language is faster and more distributable than either, but there is a chicken-egg problem about the ecosystem. MathWorks' provided libraries for Matlab are excellent, amazingly documented and massively supported. Python libraries are just hugely numerous in any domain you can imagine...
There should be a very simple way to translate Matlab to Julia that anyone can use easily.
Yeah, GPT-4 *could* do that - but I don't think very many people would ever do that. Why? Because often Matlab code is private and the owners can't hand their code over to some other party.
So some other method, e.g. a Galpaca- or Llama-based model distilled step-by-step for Matlab->Julia translation, should be popularized.
Especially with the distilling-step-by-step approach, you could get something that runs quite efficiently on a laptop.
People get settled in their toolsets. Besides, Julia does have a lot of syntax similarities to Matlab, but that doesn't mean that switching is straightforward.
It's still a completely different language with its own paradigms, semantics, ecosystem, etc.
At least in the past, there were some issues with the licensing of dependencies. For example, both Julia and MATLAB were dependent on a variety of routines from SuiteSparse such as the Cholesky factorization and the QR factorization. These routines are dual licensed commercial/GPL. The difference is that your MATLAB license gave the ability to use things like the MATLAB Compiler to distribute somewhat closed code to a client because they, ostensibly, have a license for SuiteSparse. Julia did not, so any project compiled into a standalone executable would be subject to the GPL unless an additional license for the dependencies was purchased. Now, if you're only distributing the source code, which most people do, this doesn't matter as much, but we should all be aware of our license responsibilities. MATLAB has more functionality built in, and I trust they've done the legal legwork for their included routines, so I don't have to.
To be clear, Julia constantly updates and I'm sure many of its original dependencies are being recoded to help address this issue. I still think it's worth a check and audit if someone wants to put together a broader project as to not get burned by this issue later.
While the Matlab language is horrific, switching requires much more than that. Matlab has a lot of inertia in large scientific projects. It will change, eventually, but this is a slow process. Fortran was the tool for numerical computation for decades, even when languages with better syntax were available, because of its fast, rock-solid, and validated libraries.
On technical merits, Matlab still comes out ahead in fast plotting of large datasets. And Julia still has a reputation for a "go fast and break things" community and the corresponding cavalier response to bugs. Those will slowly change, but as of now, Matlab holds an advantage there. My 2c.
This makes a big difference in usability: before, loading a big project was almost in the "coffee time" category; now it's more "wait a few seconds". It helps a lot to make the tool feel more responsive.
So, you mean that loading a bigger project in Julia was more or less equal to compiling it with some language like C++? And you had to do it every single time in order to work with the project? This doesn't sound too good, tbh.
Congratulations on the release! Package extensions and package images are a huge boost to the usability of Julia.
To all Julia users: Go forth, and make use of PrecompileTools.jl in your packages! The latency only drops if you actually make use of precompilation, and it's pretty easy to use. I can't wait for more of the ecosystem to start making use of it.
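For package authors wondering what adopting it looks like, here is a minimal sketch (the module and `process` are made-up names; PrecompileTools.jl must be a dependency of the package):

```julia
module MyPackage

using PrecompileTools

process(xs) = sum(abs2, xs)   # stand-in for your package's real API

# Executed during precompilation (Julia >= 1.9): calls made inside
# @compile_workload are compiled and cached into the package image.
@setup_workload begin
    data = rand(Float64, 16)      # setup code itself is not cached
    @compile_workload begin
        process(data)
    end
end

end # module
```

After that, users pay the compile cost once at `Pkg.precompile` time instead of at first call in every session.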
I really like "Julia, the programming language" and had a great experience using it on the few occasions where it made sense. But whenever a colleague asks me if I can recommend it, I have to say "no".
The crux is that its "just-ahead-of-time" compiler disqualifies it for a lot of use cases: I actually would prefer it over Python for small scripts, but the compilation overhead is too long.
On the other hand, I would use it over C++ for some applications if it could easily produce portable binaries.
With the steady progress in improving precompilation, I'm optimistic to use it more often in the future, though.
Yeah, I agree. It's good for specific use cases where the JIT latency doesn't matter too much - which means either interactive work or long-running computations. So, mostly science/engineering work, and perhaps stuff like generative art, building websites, and so on.
When latency is much better and/or it can compile static binaries, the use cases for Julia will hopefully broaden.
> I actually would prefer it over Python for small scripts, but the compilation overhead is too long
Looks like this release reduces that by a lot, see the first section in the OP on caching native code, modulo adoption of good precompilation habits by the various packages.
I have the same feelings as you: I stick to Python for tasks that aren't worthy of a long compilation because the execution time would be small, but I always use Julia for computation-intensive tasks.
However, because of this, when somebody asks me if I would recommend Julia to them, instead of answering “no” I just say “it depends”
I feel this release might finally make Julia worth considering again. Previously loading time for something as simple as opening a csv and plotting it was a deal breaker.
I used to have a reasonably simple notebook for a paper which took about 35 minutes to compile on an old university-provided CPU; even when opening it for the second time.
Therefore, I'm really excited for the improvements in code caching! Thanks to Tim Holy, Jameson Nash, Valentin Churavy, and others for your work.
We had a big Julia push this month after 2 years of just messing around. It's better than APL to read (so is Sanskrit), but we hit a SCREECHING halt when we realized that it wasn't going to happen that we could handle our streaming data with Pluto notebooks on the web. Pluto notebooks are wonderful and can handle streaming data, just not on a hosted web page with multiple people using it. We tried to use Stipple.jl (part of Genie.jl) and that kept freezing (we suspect because of pacing issues, so 1 sec plus should be fine). The point of all of this is that we have found Julia to be GREAT for building the back-end stuff, but not for manipulation of streaming data on the web. We can easily fix this with ZMQ and send the data to Python, but Julia was supposed to be a 1-language solution. We're trying to dodge the web side of things with Humane, so maybe we'll be happier bunnies in 2024.
> when we realized that it wasn't going to happen that we could handle our streaming data with Pluto notebooks on the web
It sounds like the issue is probably unrelated to using Pluto, and likely more to do with the streaming libraries used and memory management - but that's just a guess based on the minimal info here. When you say it couldn't handle streaming data, what issues did you have? By "streaming data with Pluto notebooks on the web" do you mean PlutoSliderServer or something else?
FWIW, Fons and co are very responsive to user issues (for eg. on the Zulip pluto channel [1]), so if you haven't tried that already, I'd recommend that. Similarly with Stipple, I believe they're trying to build a company out of it, so they'll probably be very receptive to business use cases and making them work.
Can I ask what you would "easily" do after you "send the data to Python"? What Python framework would you use to easily build interactive real-time streaming data apps?
I'm asking because I work on Shiny, a reactive web framework for Python (and R) that aims to solve this problem well, and I'm having trouble figuring out how Python people have been doing this sort of thing. It's straightforward with a lower-level web framework like Flask/FastAPI but then you lose the nice reactive properties of something like Stipple.jl (and Shiny).
> Users can also create custom local "Startup" packages that load dependencies and precompile workloads tailored to their daily work.
That's big! Now I can add packages to my startup.jl without having to worry that every single REPL startup will be slowed down by them. This also eases the pain of things being moved away from the standard library, since we can just add them back to the base environment and load them at startup, making it basically the same thing.
Note there's a distinction between "startup.jl" and "Startup.jl": the latter is a package, not a script. That's necessary to allow precompilation. But you can add `using Startup` to your "startup.jl" so that it gets loaded automatically. Fortunately, it's very easy to create these personal packages; see instructions at https://julialang.github.io/PrecompileTools.jl/stable/#Tutor...
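A sketch of what such a personal package can look like (the package name and dependencies are illustrative; Reexport.jl is the usual trick for making the loaded names visible in `Main`):

```julia
# ~/.julia/dev/Startup/src/Startup.jl
# created once via: using Pkg; Pkg.generate("Startup"); Pkg.develop(path="Startup")
module Startup

using Reexport
@reexport using Statistics      # whatever you reach for daily
@reexport using LinearAlgebra

end
```

Then a single `using Startup` in `~/.julia/config/startup.jl` loads the whole set from one precompiled cache.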
Two very nice additions to the REPL that weren't mentioned in the highlights:
* `Alt-e` now opens the current input in an editor. The content (if modified) will be executed upon exiting the editor
* A "numbered prompt" mode which prints numbers for each input and output and stores evaluated results in Out can be activated with REPL.numbered_prompt!() (basically `In[3]` `Out[3]` markers like in Mathematica/Jupyter).
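The numbered-prompt mode is a one-liner to try; a sample transcript:

```julia
julia> using REPL

julia> REPL.numbered_prompt!()

In [1]: 2 + 2
Out[1]: 4

In [2]: Out[1]^2   # earlier results stay addressable through Out
Out[2]: 16
```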
I'm quite interested in the interactive thread pool (although I assume it works based on conventions, with everyone playing nice). Julia seems to have a powerful parallelism model, but it couldn't previously be applied to responsive GUI and web frameworks that require low latency. So it is nice if you can indeed have, for example, the tasks handling HTTP requests focus on serving them as fast as possible, while the background worker threads dealing with larger computations use all the speed of the Julia language without being constantly interrupted.
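For reference, the 1.9 syntax looks like this (the handler and job are hypothetical stand-ins; the interactive pool only has threads if you launch Julia with e.g. `julia --threads 3,1`, meaning 3 default + 1 interactive):

```julia
using Base.Threads

handle_request() = "200 OK"              # hypothetical HTTP handler
crunch_numbers() = sum(sqrt, 1:10^6)     # hypothetical heavy job

# Heavy computation goes to the default worker pool:
t_bg = Threads.@spawn :default crunch_numbers()

# Latency-sensitive work goes to the interactive pool, which the
# scheduler keeps responsive (guarded in case none were started):
if Threads.nthreads(:interactive) > 0
    t_ui = Threads.@spawn :interactive handle_request()
    fetch(t_ui)
end

fetch(t_bg)
```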
> Pkg.add can now be told to prefer to add already installed versions of packages (those that have already been downloaded onto your machine)
> set the env var `JULIA_PKG_PRESERVE_TIERED_INSTALLED` to true.
How is this different from setting `Pkg.offline(true)` and then doing the `add`? I don't know the intricacies of how it works, but that's what I've been doing when I just need to try something out in a temp environment.
One difference is that Pkg.offline(true) will error if it cannot resolve something with packages already installed while with this option it will fall back to downloading new versions.
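A rough sketch of the two modes side by side ("Example" is just a placeholder package; the `add` calls are left commented out so the snippet doesn't touch the network):

```julia
using Pkg

# Strict offline mode: resolves only against already-installed
# versions, and errors if that's impossible.
Pkg.offline(true)
# Pkg.add("Example")
Pkg.offline(false)

# Tiered mode: *prefer* installed versions, but fall back to
# downloading whatever the resolver still needs.
ENV["JULIA_PKG_PRESERVE_TIERED_INSTALLED"] = "true"
# Pkg.add("Example")
```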
> We came to the conclusion that a global fastmath option is impossible to use correctly in Julia.
I'd assumed that global fastmath was a bad idea in general, and that that was the reason for making this a no-op. Is there a reason it's particularly bad in Julia - some assumptions the standard library makes, or something?
It is bad in general, but it ends up being worse in Julia because C and C++ generally aren't compiled with whole-program optimization. Global fastmath is more aggressive the more you inline, and in C/C++ the math library is usually a statically linked library, which creates an inlining barrier. Julia has all the code at runtime, and is therefore often able to run faster by inlining more code. The downside of this is that a global fastmath flag will optimize more than you think it should and give even more wrong answers than usual.
From the example in the article (exp), it sounds like they are implementing highly optimized versions of transcendental functions. This is great! One of the reasons gfortran is so much slower than the Intel Fortran compiler is the slow special functions it uses. However, those tricks appear to degrade badly under some LLVM optimizations that are enabled with fastmath.
It makes sense to optimize for the non-fast math case because that's the recommended setting, and I guess having two implementations of all the (very important, easy to mess up, core) special functions + all the testing infra to check that they work correctly on all platforms was probably deemed too much work for marginal benefits.
From a non-technical point of view (since the technical answer was already provided), I think this sort of magic optimization is a double-edged sword in any language. No matter what you do, there will be corner cases that need manual tuning that become inaccessible behind some init option.
fastmath is bad in general. In Julia it's not as bad as in many other languages because it's kept local. The @fastmath macro is essentially a find-and-replace macro: it finds things like ^ and replaces them with the fast_pow function, which drops the 1ulp accuracy requirement. However, this global option was really the one place left in Julia where fastmath could creep in, hence the reason to drop it.
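A small illustration of that locality: the rewrite applies only inside the annotated block, so the rest of the program keeps strict IEEE semantics:

```julia
function sumsq(xs)
    s = 0.0
    for x in xs
        s += x^2
    end
    s
end

function sumsq_fast(xs)
    s = 0.0
    @fastmath for x in xs   # operators swapped for fast variants here only
        s += x^2
    end
    s
end

sumsq([3.0, 4.0])        # 25.0, strict IEEE arithmetic
sumsq_fast([3.0, 4.0])   # 25.0 here too, but reassociation is allowed,
                         # so results may differ in the last bits in general
```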
> It turns out (somewhat insanely) that when -ffast-math is enabled, the compiler will link in a constructor that sets the FTZ/DAZ flags whenever the library is loaded — even on shared libraries, which means that any application that loads that library will have its floating point behavior changed for the whole process. And -Ofast, which sounds appealingly like a "make my program go fast" flag, automatically enables -ffast-math, so some projects may unwittingly turn it on without realizing the implications.
With Julia, there is the advantage here that (a) most libraries don't have binary artifacts being built in another language, (b) the Julia core math library is written in Julia and is thus not a shared library affected by this, and (c) those that do have their binaries built and hosted in https://github.com/JuliaPackaging/Yggdrasil. So in the binary building and delivery system you can see there are some patches that forcibly remove fastmath from the binaries being built to avoid this problem (https://github.com/search?q=repo%3AJuliaPackaging%2FYggdrasi...). Part (b) of course is the part that is then made globally safe by the removal of the flag in Julia itself, so generally Julia should be well-guarded against this kind of issue with these sets of safeguards in place.
"Together with PrecompileTools.jl, Julia 1.9 delivers many of the benefits of PackageCompiler without the need for user-customization."
does it mean I still have to invoke special workflows and commands to get compilation benefits or does it work out of the box for normal julia invocations?
PrecompileTools works out of the box - in the sense that the package developer needs to add a "@compile_workload" block in their package, but the users don't need to do anything. There is no special workflow or command to use it.
The tradeoffs are somewhat larger load times (TTL), increased precompilation time (because some of the compilation moves to precompile time), and increased disk usage by the package.
I like to explore alternatives to Python, and Julia has been one of the tools I am waiting to become mature enough to actually invest some time in. But every time I start reading threads, I see comments from actual users reporting half-hour and "coffee time" project compilations. Then the dreaded ecosystem problem. Then I think to myself: well, it's not the time yet.
Also, I wish Julia was as popular in Europe as it is overseas.
Could you elaborate on the ecosystem problem? For my corner of the world, Julia probably has one of the highest-quality ecosystems (differential equations, physics modeling, autodiff through very complicated code, probabilistic programming, SIMD/multithreading, wonderful plotting libraries (the Makie.jl ecosystem), and good data wrangling capabilities (the DataFrames.jl ecosystem)).
I am curious: what are the fields where it is less well developed?
Honestly, I'd stick with Python or learn a statically compiled language to broaden your world. I spent years in the Julia situation and it's more of a cult than anything else. If you ever end up with a job asking for Julia (not likely), you can pick it up in a week or so of free time after some muscle memory kicks in.
I didn't even know some of these things were being worked on until recently. I totally understand why devs don't treat development like a Twitter feed, posting every thought that pops into their head instead of working. However, it would be really interesting to follow some of these developments without having to deep lurk all the PRs.
A lot of things are shared on a daily basis. There's a lot of open discussion on the various community channels like Discourse and Slack: the #ttfx channel on Slack, for example, is a great one to follow to keep up with the latency changes and report wins and losses of different changes. There are a lot of random package devs testing each PR to show how the different changes are affecting their package. One that comes to mind is the Trixi.jl folks, who are sharing the result of almost every update with a bunch of plots to track the latency changes. See https://julialang.org/community/ for a full list of community channels.
Things of course only show up on the HN front page when they reach a sexy conclusion, which also means that what shows up on HN is a very biased subset of the discussion, one that omits most subtlety and posts the biggest speedup numbers. Most of the day-to-day is of course more like 10% changes in some case, where only when compounded 100 times do you finally have a story the general HN public cares to hear. This also generally means that the long discussions of caveats and edge cases are filtered out of what most of the public tends to read (it's just difficult to capture some things in a blog post in any concise way), so if you care for the nuance I highly recommend joining some of the Slack channels.
The NEWS.md is created pretty early on in the process, so you can track that to see "fresh off the oven" changes. For eg. here's the one for Julia 1.10 (goes without saying that it's incomplete, subject to change, etc.): https://github.com/JuliaLang/julia/blob/master/NEWS.md
I have been a fly on the wall in the ttfx channel on their slack. And there are a few other channels about julia internals. I do not have anything to contribute there, but it is fascinating to learn various julia internal details from listening in on these threads.
The remaining issues I had are: I heard there are still bugs in the standard library regarding changing index offsets (from 1 to 0, for example), and IIRC the language build also depends on a fork of LLVM (https://github.com/JuliaLang/llvm-project)
Are both of those still true? I'm a zero-index guy, but having index offsets is fine as long as the standard library is high quality. As for LLVM, I'd prefer it not need a fork but that's less important.
I'm not aware of bugs with offset arrays in the standard library. It's happened before and it may happen again, but Base and the standard library are generally very good at avoiding that.
The main problem is non-standard-library packages that were written back in the early Julia days before OffsetArrays existed (e.g. a big offender IIRC was StatsBase.jl), and so weren't written with any awareness of how to deal with generic indexing.
OffsetArrays.jl is a neat trick, and sometimes it really is useful, e.g. when mimicking some code that was written in a 0-based language, or just when you're working with array offsets a lot, but I wouldn't really recommend using it everywhere. Other non-array indexable types like Tuple don't have 0-based counterparts (as far as I'm aware), so you'll still be jumping back and forth between 0-based and 1-based, and it's just an extra layer of mental load.
Honestly though, it's often not very necessary to talk about array indices at all. The preferred pattern is just to use `for i in eachindex(A)`, `A[begin]`, `A[end]` etc.
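A quick sketch of that pattern: written this way, the function never assumes 1-based indices, so it works unchanged on an OffsetArray:

```julia
# Running maximum, index-agnostic: `eachindex`, `begin`, and `similar`
# all follow whatever axes A actually has.
function running_max(A)
    out = similar(A)
    m = A[begin]
    for i in eachindex(A)
        m = max(m, A[i])
        out[i] = m
    end
    out
end

running_max([3, 1, 4, 1, 5])   # == [3, 3, 4, 4, 5]

# using OffsetArrays
# running_max(OffsetArray([3, 1, 4], 0:2))   # keeps the 0-based axes
```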
Yes, we use a fork of LLVM, but not because we're really changing its functionality, just because we have patches for bugs. The bugs are typically reported upstream and our patches are contributed, but the feedback loop is slow enough that it's easiest to just maintain our own patched fork. We do keep it updated, though (this release brings us up to v14), and there shouldn't be any divergences from upstream other than the bugfixes, as far as I'm aware.
> To analyze your heap snapshot, open a Chromium browser and follow these steps: right click -> inspect -> memory -> load. Upload your .heapsnapshot file, and a new tab will appear on the left side to display your snapshot's details.
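For context, the snapshot itself is produced from the Julia side; in 1.9 this lives in the Profile standard library (the file name here is arbitrary):

```julia
using Profile

# allocate something worth inspecting
data = [rand(100) for _ in 1:1000]

# writes a Chromium-devtools-compatible .heapsnapshot file (Julia >= 1.9)
Profile.take_heap_snapshot("julia.heapsnapshot")
```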
Can the same be done with Firefox's `about:memory`'s `Load...` button, or is it Chromium specific?
Why is this comment necessary on every Julia-related post? I don't even use Julia outside tutorials but this adds no value beyond things that have already been said N number of times.
Every programming language doesn't need to become _the_ language to do something. They are experiments in how to best express what you want to compute. Even if Julia never takes off, they explore multiple directions other languages might want to implement - multiple dispatch for polymorphism, nested parallelism, macros so you can create DSLs from regular Julia code, and so much more. Asserting that Julia is only successful if everyone is using it is just super reductive.
For me, the issue is that there's too much magic happening, e.g. with macros, but finding or solving the issues seems needlessly convoluted. Haskell, Rust, Python, or even C++ feel a lot less magic, and more reasonable despite all being quite different.
[1] https://github.com/JuliaLang/julia/commit/06d4cf072db24ca6df...
[2] https://parca.dev/
[3] https://github.com/parca-dev/parca-demo/pull/37
[4] https://github.com/JuliaLang/julia/issues/40655
aborsy|2 years ago
I provide the option of Julia in my tutorials. Students are lazy, and don’t want to explore something new. Most of them stick with matlab.
What prevents matlab users from switching? The syntax is similar.
jasode|2 years ago
Choosing a programming language based on just comparing the language syntax only works for academic settings or toy projects for self-curiosity and learning. Once you consider adopting a language for complicated real-world industry usage, you have to look beyond the syntax and compare ecosystem to ecosystem.
E.g. Look over the following MATLAB "toolboxes" and add-ons developed over decades: https://www.mathworks.com/products.html
Julia doesn't have a strong equivalent ecosystem for many of those. In that MATLAB product list is Simulink. Tesla uses that tool to optimize their cars: https://www.mathworks.com/company/newsletters/articles/using...
You can take a look at some of the 1-minute overview videos to get a sense of MATLAB toolboxes that companies pay extra money for: https://www.youtube.com/results?search_query=discover+matlab...
It has add-ons such as medical imaging toolkit, wireless communications (antenna signal modeling), etc. And MATLAB continues releasing new enhancements that the Julia ecosystem doesn't keep up with.
If one doesn't need any of the productivity tools that MATLAB provides, Julia becomes a more realistic choice.
Or to put it another way, companies didn't really "choose the MATLAB programming language". What they really did was choose the MATLAB visual IDE and toolkits -- which incidentally had the MATLAB programming language.
geokon|2 years ago
It's got a frustrating "not fun" on-boarding. ie. the number of minutes from downloading "Julia" to getting cool satisfying results
1. It not a calculator on steroids like Matlab. It doesn't have one main open source IDE like Octave/Rstudio that you can drop in and play around in (see plots docs repl workspace)
2. The default language is more like a "proper programming language". To even make a basic plot you need to import one of a dozen plotting libraries (which requires learning how libraries and importing works - boring ..) and how is someone just getting started to decide which one..? I don't need that analysis paralysis when I'm just getting started
3. Documentation .. Well it's very hard to compete with Matlab here - but the website is not as confidence inducing. The landing page is a wall of text: https://docs.julialang.org/en/v1/ Tbh, from the subsequent manual listing it's not even clear it's a math-focused programming language . It's talking about constructors, data types, ffi, stack traces, networking etc etc.
derstander|2 years ago
I’ll offer my perspective. I’m switching little-by-little. Why don’t I just rip off the bandaid and switch whole-hog immediately? The literally twenty years of tools I’ve developed in MATLAB.
At work, I’m paid to perform analyses, develop algorithms, and document that work to my colleagues and our sponsors. I’m not paid to learn Julia. If I’m working on something completely novel where none of the MATLAB building blocks I’ve wrote over the years are useful, or the porting time is a small (for some arbitrary and subjective definition) of the overall task time, then I’ll do it in Julia.
The toolboxes aren’t a huge sticking point for me: Mathworks has only somewhat recently developed toolboxes geared toward my particular domain, so none of my code relies on them. My ability to share with colleagues is a bit of a sticking point. We’re predominantly a MATLAB shop (and this seems true not just at my company, but in our particular niche in industry). There has been some movement toward Python. But if it’s anything like the transition from Fortran to MATLAB — which was still on-going when I started in the early 2000s — then a full switch to either Python or Julia is still a ways off.
qsi|2 years ago
I'm very comfortable in Matlab and often know immediately what's wrong when I hit those oddities. In Python it usually means I spend considerable time googling and tinkering before I even understand what I did wrong because I have far less experience... Same when I tinker with Julia.
And for some people, it being a Real Programming Language may actually be a disadvantage... It typically means you need a better understanding than just type-and-run.
kristofferc|2 years ago
I also did this and only the students that were well ahead of the rest gave it a try. But I think this is natural, many students are already pushed to the limit to manage the current set of studies. Learning a whole new language on top of that introduces a lot of extra risk and has an opportunity cost that might not make it feasible (unless you are already doing very well).
I don't think it is so much that students are lazy but more that the current way that the study plan is made in universities doesn't allow for much risk taking by the students. So they will just go the "cookie cutter" way.
nunuvit|2 years ago
It's not possible to understand why someone would prefer Matlab through a software engineer's lens. Our jobs are different and a real programming language isn't anywhere on our list of needs or wants for matrix laboratory work. We actually prefer semantics like copy-on-write and features like dynamically adding fields to structs at runtime. It fits our workflow better, and that's ultimately what matters most.
I'm sure one day I'll add Julia to the list of real programming languages that I've used to write a library, but I'll still wrap it and call it from Matlab just like everything else.
MagnumOpus|2 years ago
Julia as a compiled language is faster and more distributable than either, but there is a chicken-egg problem about the ecosystem. MathWorks' provided libraries for Matlab are excellent, amazingly documented and massively supported. Python libraries are just hugely numerous in any domain you can imagine...
chaxor|2 years ago
So some other method, e.g. a Galpaca- or Llama-based distilling-step-by-step model for Matlab->Julia translation, should be popularized.
Especially with the distill step by step you could get something that runs quite efficiently on a laptop.
stodor89|2 years ago
Pretty much everything students do is new to them.
eigenspace|2 years ago
It’s still a completely different language with its own paradigms, semantics, ecosystem, etc.
kxyvr|2 years ago
To be clear, Julia constantly updates, and I'm sure many of its original dependencies are being recoded to help address this issue. I still think it's worth a check and an audit if someone wants to put together a broader project, so as not to get burned by this issue later.
ptero|2 years ago
On technical merits Matlab still comes ahead in fast plotting of large datasets. And Julia still has a reputation for a "go fast and break things" community and the corresponding cavalier response to bugs. Those will slowly change, but as of now, Matlab holds an advantage there. My 2c.
jbieler|2 years ago
tastyminerals2|2 years ago
jakobnissen|2 years ago
To all Julia users: Go forth, and make use of PrecompileTools.jl in your packages! The latency only drops if you actually make use of precompilation, and it's pretty easy to use. I can't wait for more of the ecosystem to start making use of it.
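For anyone who hasn't tried it, here's a minimal sketch of what using PrecompileTools.jl looks like inside a package (the module, struct, and function are hypothetical; PrecompileTools.jl must be a declared dependency of your package):

```julia
module MyPkg

using PrecompileTools

struct Point
    x::Float64
    y::Float64
end

dist(a::Point, b::Point) = hypot(a.x - b.x, a.y - b.y)

@setup_workload begin
    # Setup code runs at precompile time but is not itself cached.
    p, q = Point(0.0, 0.0), Point(3.0, 4.0)
    @compile_workload begin
        # Calls made here are compiled during precompilation and the
        # native code is cached, so the first runtime call is fast.
        dist(p, q)
    end
end

end # module
```

With 1.9's native code caching, the compiled code for that workload is saved in the package image instead of being thrown away.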
npalli|2 years ago
----------------------------
JULIA 1.8.5
julia> @time using Plots
11.341913 seconds (14.83 M allocations: 948.442 MiB, 6.88% gc time, 12.73% compilation time: 62% of which was recompilation)
julia> @time plot(sin.(0:0.01:π))
3.342452 seconds (8.93 M allocations: 472.925 MiB, 4.44% gc time, 99.78% compilation time: 78% of which was recompilation)
-----------------------------------
JULIA 1.9.0
julia> @time using Plots;
2.907620 seconds (3.43 M allocations: 195.045 MiB, 7.52% gc time, 5.61% compilation time: 93% of which was recompilation)
julia> @time plot(sin.(0:0.01:π))
0.395429 seconds (907.48 k allocations: 59.422 MiB, 98.54% compilation time: 74% of which was recompilation)
majoe|2 years ago
With the steady progress in improving precompilation, I'm optimistic to use it more often in the future, though.
jakobnissen|2 years ago
When latency is much better and/or it can compile static binaries, the use case of Julia will hopefully broaden
stellalo|2 years ago
Looks like this release reduces that by a lot, see the first section in the OP on caching native code, modulo adoption of good precompilation habits by the various packages.
ziotom78|2 years ago
However, because of this, when somebody asks me if I would recommend Julia to them, instead of answering “no” I just say “it depends”.
joelthelion|2 years ago
huijzer|2 years ago
Therefore, I'm really excited for the improvements in code caching! Thanks to Tim Holy, Jameson Nash, Valentin Churavy, and others for your work.
eigenspace|2 years ago
> 35 minutes to compile
What kind of CPU are we talking about here!?
mrsofty|2 years ago
sundarurfriend|2 years ago
It sounds like the issue is probably unrelated to using Pluto, and likely more to do with the streaming libraries used and memory management - but that's just a guess based on the minimal info here. When you say it couldn't handle streaming data, what issues did you have? By "streaming data with Pluto notebooks on the web" do you mean PlutoSliderServer or something else?
FWIW, Fons and co are very responsive to user issues (for eg. on the Zulip pluto channel [1]), so if you haven't tried that already, I'd recommend that. Similarly with Stipple, I believe they're trying to build a company out of it, so they'll probably be very receptive to business use cases and making them work.
[1] https://julialang.zulipchat.com/#narrow/stream/243342-pluto....
jcheng|2 years ago
I'm asking because I work on Shiny, a reactive web framework for Python (and R) that aims to solve this problem well, and I'm having trouble figuring out how Python people have been doing this sort of thing. It's straightforward with a lower-level web framework like Flask/FastAPI but then you lose the nice reactive properties of something like Stipple.jl (and Shiny).
sundarurfriend|2 years ago
That's big! Now I can add packages to my startup.jl without having to worry that every single REPL startup will be slowed down by them. This also eases the pain of things being moved away from the standard library, since we can just add them back to the base environment and load them at startup, making it basically the same thing.
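For example, a hypothetical `~/.julia/config/startup.jl` might now reasonably look like this (the package names are just illustrative):

```julia
# ~/.julia/config/startup.jl -- runs on every REPL start.
# With 1.9's native code caching, these no longer cost seconds each.
try
    using OhMyREPL   # syntax highlighting in the REPL
    using Revise     # auto-reload edited code
catch err
    @warn "startup package failed to load" exception = err
end
```

The try/catch keeps the REPL usable even if one of the startup packages is missing in some environment.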
tholy|2 years ago
sundarurfriend|2 years ago
* `Alt-e` now opens the current input in an editor. The content (if modified) will be executed upon exiting the editor
* A "numbered prompt" mode which prints numbers for each input and output and stores evaluated results in Out can be activated with REPL.numbered_prompt!() (basically `In[3]` `Out[3]` markers like in Mathematica/Jupyter).
sundarurfriend|2 years ago
ddragon|2 years ago
sundarurfriend|2 years ago
> set the env var `JULIA_PKG_PRESERVE_TIERED_INSTALLED` to true.
How is this different from setting `Pkg.offline(true)` and then doing the `add`? I don't know the intricacies of how it works, but that's what I've been doing when I just need to try something out in a temp environment.
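For reference, the `Pkg.offline` approach I mean is just this (a sketch; `Example` is a placeholder package name):

```julia
import Pkg

Pkg.offline(true)        # resolver may only use versions already downloaded
# Pkg.add("Example")     # would now install from the local cache, or fail fast
Pkg.offline(false)       # restore normal, network-allowed behavior
```
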
kristofferc|2 years ago
sundarurfriend|2 years ago
I'd assumed that global fastmath was a bad idea in general, and assumed that was the reason for making this a no-op. Is there a reason it's particularly bad in Julia, some assumptions the standard library makes or something?
adgjlsfhk1|2 years ago
xmcqdpt2|2 years ago
It makes sense to optimize for the non-fast math case because that's the recommended setting, and I guess having two implementations of all the (very important, easy to mess up, core) special functions + all the testing infra to check that they work correctly on all platforms was probably deemed too much work for marginal benefits.
NeuroCoder|2 years ago
ChrisRackauckas|2 years ago
There are still some other difficulties of course, since fastmath in the C ABI is quite wild (or I guess, it's really the GCC implementation up to GCC 13 (https://gcc.gnu.org/bugzilla/show_bug.cgi?id=55522#c45)). Simon wrote a nice piece about the difficulties in general: https://simonbyrne.github.io/notes/fastmath/. In a general sense there is still the potential vulnerability that affects the Python ecosystem, which is that if any package has binaries built with fastmath it could cause other calculations to be fastmath as well in a non-local way (https://moyix.blogspot.com/2022/09/someones-been-messing-wit...):
> It turns out (somewhat insanely) that when -ffast-math is enabled, the compiler will link in a constructor that sets the FTZ/DAZ flags whenever the library is loaded — even on shared libraries, which means that any application that loads that library will have its floating point behavior changed for the whole process. And -Ofast, which sounds appealingly like a "make my program go fast" flag, automatically enables -ffast-math, so some projects may unwittingly turn it on without realizing the implications.
With Julia, there is the advantage here that (a) most libraries don't have binary artifacts being built in another language, (b) the Julia core math library is written in Julia and is thus not a shared library affected by this, and (c) those that do have their binaries built and hosted in https://github.com/JuliaPackaging/Yggdrasil. So in the binary building and delivery system you can see there are some patches that forcibly remove fastmath from the binaries being built to avoid this problem (https://github.com/search?q=repo%3AJuliaPackaging%2FYggdrasi...). Part (b) of course is the part that is then made globally safe by the removal of the flag in Julia itself, so generally Julia should be well-guarded against this kind of issue with these sets of safeguards in place.
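To make the contrast concrete, Julia's sanctioned alternative is the local `@fastmath` annotation, which relaxes IEEE semantics only inside the expression it wraps rather than for the whole process (a minimal sketch):

```julia
# @fastmath applies only to the annotated block; code elsewhere in the
# process keeps strict IEEE floating-point semantics.
function fast_dot(xs::Vector{Float64}, ys::Vector{Float64})
    s = 0.0
    @fastmath for i in eachindex(xs, ys)
        s += xs[i] * ys[i]
    end
    return s
end
```

This is the opposite of the `-ffast-math` constructor trick quoted above, where loading one shared library silently changes FTZ/DAZ behavior process-wide.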
kdheepak|2 years ago
singularity2001|2 years ago
does it mean I still have to invoke special workflows and commands to get compilation benefits or does it work out of the box for normal julia invocations?
jakobnissen|2 years ago
The tradeoffs are somewhat larger load times (TTL), increased precompilation time (because some of the compilation moves to precompile time), and increased disk usage by the package.
arijun|2 years ago
datadeft|2 years ago
- time-to-first-execution (TTFX)
- time-to-load (TTL)
tastyminerals2|2 years ago
Also, I wish Julia was as popular in Europe as it is overseas.
krastanov|2 years ago
I am curious what are the fields where it is less well developed?
DNF2|2 years ago
markkitti|2 years ago
cookieperson|2 years ago
NeuroCoder|2 years ago
Sorry, pretty shallow complaint. Great work!
ChrisRackauckas|2 years ago
Things of course only show up on the HN front page when they reach a sexy conclusion, which also means that what shows up on HN is a very biased subset of the discussion, one which omits most subtlety and posts the biggest speedup numbers. Most of the day-to-day is of course more like 10% changes in some case, where only when compounded 100 times do you finally have a story the general HN public cares to hear. This also generally means that the long discussions of caveats and edge cases are filtered out of what most of the public tends to read (it's just difficult to capture some things in a blog post in any concise way), so if you care for the nuance I highly recommend joining some of the Slack channels.
sundarurfriend|2 years ago
jakobnissen|2 years ago
I know this doesn't inform you about dev work on Julia in general, but it goes into detail with the recent improvements to latency
krastanov|2 years ago
torrance|2 years ago
dekhn|2 years ago
Are both of those still true? I'm a zero-index guy, but having index offsets is fine as long as the standard library is high quality. As for LLVM, I'd prefer it not need a fork but that's less important.
eigenspace|2 years ago
The main problem is non-standard library packages that were written back in early Julia days before OffsetArrays existed (e.g. a big offender IIRC was StatsBase.jl), and so weren't written with any awareness of how to deal with generic indexing.
OffsetArrays.jl are a neat trick, and sometimes they really are useful, e.g. when mimicking some code that was written in a 0-based language, or just when you're working with array offsets a lot, but I wouldn't really recommend using them everywhere. Other non-array indexable types like Tuple don't have 0-based counterparts (as far as I'm aware), so you'll still be jumping back and forth between 0-based and 1-based, and it's just an extra layer of mental load.
Honestly though, it's often not very necessary to talk about array indices at all. The preferred pattern is just to use `for i in eachindex(A)`, `A[begin]`, `A[end]` etc.
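A small sketch of that pattern -- the same code works unchanged on ordinary 1-based arrays, ranges, and offset arrays, because it never assumes what the indices are:

```julia
# Index-agnostic iteration: eachindex yields whatever indices A actually has.
function total(A)
    s = zero(eltype(A))
    for i in eachindex(A)
        s += A[i]
    end
    return s
end

# begin/end refer to the first/last valid index, whatever those happen to be.
endpoints(A) = (A[begin], A[end])
```
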
> and IIRC also the language build depends on a fork of LLVM (https://github.com/JuliaLang/llvm-project)
Yes, we use a fork of LLVM, but not because we're really changing its functionality, just because we have patches for bugs. The bugs are typically reported upstream and our patches are contributed, but the feedback loop is slow enough that it's easiest to just maintain our own patched fork. We do keep it updated though (this release brings us up to v14) and there shouldn't be any divergences from upstream other than the bugfixes, as far as I'm aware.
g0wda|2 years ago
sundarurfriend|2 years ago
Can the same be done with Firefox's `about:memory`'s `Load...` button, or is it Chromium specific?
borodi|2 years ago
unknown|2 years ago
[deleted]
kettleballroll|2 years ago
[deleted]
thelastbender12|2 years ago
Not every programming language needs to become _the_ language for something. Languages are experiments in how to best express what you want to compute. Even if Julia never takes off, it explores multiple directions other languages might want to adopt -- multiple dispatch for polymorphism, nested parallelism, macros so you can create DSLs from regular Julia code, and so much more. Asserting that Julia is only successful if everyone is using it is just super reductive.
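For readers unfamiliar with it, multiple dispatch means the method is selected on the runtime types of all arguments, not just the first -- a toy sketch:

```julia
abstract type Shape end
struct Circle <: Shape
    r::Float64
end
struct Square <: Shape
    s::Float64
end

# Single-argument dispatch, analogous to OO method overriding:
area(c::Circle) = π * c.r^2
area(sq::Square) = sq.s^2

# Dispatch on *both* arguments at once -- awkward in single-dispatch OO:
interact(::Circle, ::Circle) = :circle_circle
interact(::Shape, ::Shape) = :generic
```

The most specific applicable method wins, so `interact(Circle(1.0), Circle(2.0))` picks the circle-circle method while any other pairing falls through to the generic one.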
zorked|2 years ago
I'm old enough to remember when every Python came with the mandatory "oh no, significant whitespace" post. Now it's just the most mainstream language.
eigenspace|2 years ago
“Well, I've got nothing substantive to say, so I'd better start whining about surface-level syntax stuff I don't like”
pjmlp|2 years ago
https://juliahub.com/case-studies/
samuell|2 years ago
There was recently a tutorial on Julia for biologists in Nature Methods:
https://www.nature.com/articles/s41592-023-01832-z
That said, I imagine things might get interesting again if/when Modular open sources their Mojo/"Python" compiler.
henearkr|2 years ago
To me, it looks like, to the contrary, that Python is so full of counter-intuitive bits... like doing a[begin:end+1] to take a slice.
PartiallyTyped|2 years ago