top | item 47178563


stabbles | 3 days ago

Back in the day I had trouble convincing my C++ friends to give Julia a try, because Julia's garbage collector was a showstopper for them. But if you follow the performance tips around pre-allocation, in-place mutation, avoiding temporary allocations (and maybe avoiding cyclic references), you don't struggle with GC performance issues.
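Concretely, those tips tend to look something like this (a minimal sketch; the function names and sizes are just illustrative):

```julia
using LinearAlgebra

# Allocating version: A * x creates a fresh vector on every call.
step_alloc(A, x) = A * x

# Non-allocating version: mul! writes the product into a caller-owned buffer.
step!(y, A, x) = mul!(y, A, x)

A = rand(100, 100)
x = rand(100)
y = similar(x)          # pre-allocate the buffer once, outside the hot loop

for _ in 1:1000
    step!(y, A, x)      # re-uses y; no per-iteration allocation
    x, y = y, x         # swap buffers instead of allocating new ones
end
```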

Looking back, I think the tooling Julia had from the start, combined with the REPL, made it actually really nice to fix these performance issues. Much better than compiling and linking a binary and running it through some heap profiler tool. You could simply do

    julia> @time f()
      x.y seconds (N allocations: M bytes)

and iteratively improve part of the code base instead of profiling the entire application.

(To be fair: back then escape analysis was not implemented in the compiler, and it was hard to avoid silly allocations)


VorpalWay|3 days ago

The issue with writing high performance code in GC languages is that you end up going out of your way, writing things in a strange style just to avoid the GC. At that point you might as well use a non-GC language. In my experience, the most natural way to write things in Rust is usually the fastest (or close enough) as well.

Note: I don't know Julia specifically, but this does apply to other GC languages like OCaml and Java. Try reading code in those languages that avoids the GC; it looks very strange. In Python it is even worse: people try to avoid for loops because they are slower than list comprehensions, for example (or at least they used to; I haven't written much Python in some years).

eigenspace|3 days ago

Julia has a big culture and a lot of interfaces built around writing non-allocating code. We sometimes even overemphasize eliminating GC allocations from stuff.

Generally, the code ends up looking rather similar to non-GC languages. You create some buffers outside of your performance-sensitive parts, and then thread them through your code so they can be accessed and re-used in the hot loop or whatever.
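The buffer-threading pattern described above might look like this (a hedged sketch; `accumulate_scores!` and the shapes are hypothetical):

```julia
# Scratch buffers are passed in as arguments, so the hot loop allocates nothing.
function accumulate_scores!(out, tmp, data)
    for col in eachcol(data)
        tmp .= col .* col      # in-place broadcast into the scratch buffer
        out .+= tmp            # accumulate in place; no temporaries created
    end
    return out
end

data = rand(64, 1000)
out  = zeros(64)               # buffers created once, outside the hot part...
tmp  = similar(out)
accumulate_scores!(out, tmp, data)   # ...then threaded through the call
```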

It could be better, e.g. C++ and Rust both have some nice utilities for this stuff that are a bit hard to replicate in Julia, but it's not such a huge difference, and there are also a lot of advantages on the Julia side.

E.g. it's really nice to have the GC available for the non-performance critical parts of your code.

SatvikBeri|3 days ago

There are some big advantages to having it in the same language. You can write the easy, non-performant version quickly and gradually refactor while having an easy test case. This is especially nice in situations where you expect to throw away a lot of the code you write, e.g. research. Also, you don't have to write most code in a super performance obsessed way – Julia makes it really easy to find the key 5% that needs to be rewritten.

We've ported some tens of thousands of lines of numpy-heavy Python, and in practice our Julia code is actually more concise while being about 10x-100x more performant.

pjmlp|3 days ago

Depends on which GC language; people keep forgetting many have C++-like capabilities besides having a GC.

D, C#, Swift, Nim, ...

Agree that in Julia's case the flexibility is not quite there, but it's still much better than using Python and then having to write most of the work in C, C++, Fortran, ...

Which is a thing that gets lost quite often in these discussions: just because the last 5% might be a bit harder doesn't mean we have to throw everything away and start from scratch in another programming language, with its own set of problems.

dpc_01234|2 days ago

> In my experience, the most natural way to write things in Rust is usually the fastest (or close enough) as well.

Well, a lot of C/Odin/Zig people will point out that Rust's stdlib encourages heap allocations. For the best performance you typically want to store your data in some data-oriented model, avoid allocations, and so on, which is not exactly against idiomatic Rust, but it is more involved than typical straightforward Rust code that throws allocations around.