Raku has metaprogramming and an almost unprecedented amount of syntax... and an insanely slow implementation, although they're chipping away at the speed thing.
Metaprogramming will struggle to go mainstream. It's the way you shrink a 50,000 line program down to 1,500 lines that nobody but the author can understand. I absolutely love doing it, and I think I could hand the code off to somebody and have them make small changes in the DSL code, but if they have to change something in the implementation to extend the DSL, all bets are off.
After all these years we still don't have macro assemblers as good as what the IBM 360 had, even though we now have architectures with enough registers that it would be reasonable to pass a register name as an argument to a macro, like you could in IBM Macro Assembler.
Like most scripting languages (e.g. Python), execution speed was not the top priority; as with Python, most heavy lifting can be (and often is) done by modules written in native C, Rust, or similar, via the C FFI / Inline::Perl5 interfaces.
To your point, I recently measured the compile time of this raku module...
# speed (2020)
# use Physics::Measure :ALL; ...13s first-, 2.8s pre- compiled
# speed (2024)
# use Physics::Measure :ALL; ...4.4s first-, 0.9s pre- compiled
... so about a 3x speed up in the last 4 years.
Also raku has no GIL and has good support for hyper / race so can get a lot out of your 32 cores (if you want speed).
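For the curious, a minimal sketch of what that looks like (my illustration, not from any of the modules above): a CPU-bound map spread across cores. `.hyper` preserves input order; `.race` is the same call but gives up ordering for a bit more throughput.

```raku
# Parallel, order-preserving map across cores with .hyper
# (swap in .race when result order doesn't matter).
# batch controls how many items each worker takes at a time.
my @squares = (1..1_000_000).hyper(batch => 4096).map(* ** 2);
say @squares[^5];  # (1 4 9 16 25)
```

The batch size is a tuning knob: bigger batches amortize scheduling overhead for cheap per-item work like this.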
Another raku module worth mentioning, Dan::Polars, connects to the Rust Polars library via FFI (thus getting Rust-level execution speed, since Polars is a lot faster than Python Pandas thanks to the underlying Apache Arrow data structures) ... this takes about 2s for the raku side to compile and about 15s for the rust cargo stack to compile ...
I actually did a quick search to see if I was blowing hot air, and found a blog post that shows a bunch of benchmarks over time with a fairly typical Raku/Perl flavored text processing task: it was taking 0.23s in the July 2022 release vs. 0.59s in Jan 2016 [1].
So that's a pretty impressive improvement, roughly 2.6x over six and a half years, but I remember Raku clocking in at something like 4x or 5x slower than Python on benchmarks from the last few years, so by my very sloppy math it still has to speed up by at least another 2x or 3x to match Python.
It's also possible that there has been a ton of speedup in the year and a half since that benchmark, or that it's not representative, but that's where I got the idea from.
[1] https://blogs.perl.org/users/sylvain_colinet/2023/01/benchma...