
Julia 1.7 Highlights

237 points | logankilpatrick | 4 years ago | julialang.org

95 comments

[+] fault1|4 years ago|reply
I'm really excited about Julia 1.8 and diffractor: https://github.com/JuliaDiff/Diffractor.jl

Keno Fischer gave a presentation in a discussion moderated by Simon Peyton Jones here: https://www.youtube.com/watch?v=mQnSRfseu0c

Also, combined with Enzyme, which can differentiate static paths at the LLVM bitcode level: https://enzyme.mit.edu/

I wonder what this will enable at the frontier of what is computationally tractable in the combo of physics + ml.

[+] plafl|4 years ago|reply
Julia is the Haskell of numerical computing.

It seems like AD is a solved problem, which it is not. I can personally think of two instances where I have hand-rolled my own gradients: in the context of recommender systems (dense matrices are evil) and right now in the context of real-time collision detection (memory allocations are evil).
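To make the allocation point concrete, here's a minimal hand-rolled sketch (my own illustrative example, not code from either project above): an in-place gradient of f(x) = sum(abs2, x) that writes into a preallocated buffer, the way you'd want it in a real-time loop:

```julia
# In-place gradient of f(x) = sum(abs2, x), i.e. g[i] = 2x[i].
# Writing into a preallocated buffer keeps the hot path allocation-free.
function sumsq_grad!(g::AbstractVector, x::AbstractVector)
    @inbounds for i in eachindex(x, g)
        g[i] = 2 * x[i]
    end
    return g
end

x = [1.0, 2.0, 3.0]
g = similar(x)       # allocate the buffer once, reuse it on every call
sumsq_grad!(g, x)    # g is now [2.0, 4.0, 6.0]
```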

One of the domains where I hope Julia will excel is precisely the physics + ML combination you mention (I frequently check whether the MuJoCo source code has finally been released).

Anyway, I encourage everyone with any background in scientific computing or DS to have a look at Julia. The ecosystem is nowhere near Python's yet, but the language is very good and the tooling is getting better. The performance Julia provides without interfacing with C/C++ or Fortran is not just a convenience; it has architectural consequences. It's not about coding faster, it's about coding "further".

[+] a-dub|4 years ago|reply
this talk and the referenced pair talk by matt bauman were really excellent.

basically the big takeaway that i walked away with was that they defined a sort of pseudoclosure in the guts of the language before the compiler which relaxes some of the behaviors of a closure (ie what gets captured) which then enables the creation of larger optimizable regions for the compiler in the context of doing autodiff.

the demo by matt bauman where they do autodiff on a function that prompts for and has the user type in the name of another julia mathematical function was really impressive!

[+] 22c|4 years ago|reply
If anyone else is wondering what the AD stands for in "Next Generation AD" it's "algorithmic differentiation".
[+] savant_penguin|4 years ago|reply
What's special about this compared to other ADs?

Reading the front page it seems to focus on efficient higher order derivatives, is that it?

[+] whimsicalism|4 years ago|reply
It would be awesome to get library/macro support for something like this in Rust... it's LLVM, so theoretically it should be able to hook in?

I've always thought that AD needs something akin to a "compile" step.

[+] matjet|4 years ago|reply
Multidimensional array construction is something I had looked forward to in Julia. I am not convinced that the approach taken in Julia 1.7 compares favorably with other language implementations (ignoring R). In my view the numpy syntax has more clarity for this task: [[[1,2],[3,4]],[[5,6],[7,8]]] compared with [1 2;3 4;;;5 6;7 8]

It is not immediately clear why [1,2,3,4] is equivalent to [1;2;3;4], but [1,2,,3,4] (and [1,2;3,4] vs [1 2;3 4] etc.) is not equivalent to [1;2;;3;4]. For creating a 3d slice, I expected that ";", ";;", ";;;" would each refer to incrementing a specific dimension. E.g., it seems intuitive that if you can create a 2d matrix with [1 2;3 4], then you should be able to make a 3d tensor with [1 2;3 4;;5 6;7 8]

[+] DNF2|4 years ago|reply
The semicolon use is actually completely consistent, ";", ";;", ";;;" etc. do indeed refer to incrementation of the corresponding dimension. Try [1;2;3], then [1;2;3 ;; 4;5;6] and then [1;2;3 ;; 4;5;6 ;;; 7;8;9;; 10;11;12]
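A quick sketch of what those evaluate to in a 1.7+ REPL:

```julia
# ";" fills dimension 1, ";;" dimension 2, ";;;" dimension 3, and so on
v = [1; 2; 3]                                        # 3-element Vector
m = [1; 2; 3;; 4; 5; 6]                              # 3×2 Matrix
t = [1; 2; 3;; 4; 5; 6;;; 7; 8; 9;; 10; 11; 12]      # 3×2×2 Array
```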

The confusion arises because "," and whitespace also have overlapping meanings in array notation. "," is used for regular vector creation, and whitespace for concatenation in the second dimension. The coexistence of those two different notations is a bit uneasy, but I doubt that "," and whitespace will be deprecated, since they are so entrenched and familiar. And for 1D and 2D arrays (which probably make up >99% of all literal array use) it's also more elegant and clean.

Maybe this will help: "," separators only work for 1D arrays, you cannot use them while making 2D or higher arrays. Whitespace is used when you want to create the array writing the data down row-wise, so your innermost dimension in writing is actually the second dimension of the array. The semicolons are for completely consistently going from the first to the n'th dimension, with the corresponding number of ";" in each dimension.

I think it would be hard to come up with a nice way to express these in a unified notation.

The way numpy does this with lots of brackets isn't really very convenient when working in 2D, which is the more common case.

[+] DNF2|4 years ago|reply
To address the specific examples: [1,2,3,4] is literal vector creation, and also "," is just the regular way you create a list of inputs to a function. [1;2;3;4] is concatenation along the first dimension, so it must be the same as [1,2,3,4].

[1,2,,3,4] has no meaning, because repeated "," hasn't been given any syntactical meaning. But maybe that would have been a good idea?

[1,2;3,4] mixes literal vector syntax and vertical concatenation. The only reasonable interpretation would be that it's the same as [1;2;3;4], so maybe it could have been allowed. But ";" is supposed to concatenate arrays, with a special case for scalars (0-dimensional arrays), and it's not clear to me what would be concatenated in [1,2;3,4].

[1 2; 3 4] on the other hand, concatenates two row vectors vertically, so this has a clear meaning. It can't be equivalent to [1;2;;3;4], since that has 1 and 2 lying along a column not a row.

A 3D tensor can't be [1 2;3 4;;5 6;7 8], since it only has ";;" while concatenation along the 3rd dimension must be ";;;". The notation [1 2;3 4;;; 5 6;7 8] works for this, but mixing whitespace notation and ";" notation is confusing.

So, clearly this is all a bit complicated, but it is a solution to a somewhat complicated problem, where you both need to allow new, consistent, notation, while simultaneously keeping the historical notation, which is in fact better in the most common (lower-dimensional) cases.
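Checking a couple of those cases in the REPL (1.7+) makes the transposition point concrete:

```julia
A = [1 2; 3 4]       # whitespace writes rows: [1 2] stacked on [3 4]
B = [1; 2;; 3; 4]    # ";;" writes columns: [1,2] beside [3,4]
A == permutedims(B)  # true — the two notations lay out the same numbers transposed

C = [1 2; 3 4;;; 5 6; 7 8]   # 2×2×2: ";;;" separates the two slices
```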

[+] wodenokoto|4 years ago|reply
Yeah, I looked up the manual and completely failed to understand how the syntax is supposed to be read and written.
[+] Mandelmus|4 years ago|reply
Agreed, I also found that rather confusing.
[+] sundarurfriend|4 years ago|reply
This is a really nice write up, hope we get one of these at least once every few releases.

The multi-; syntax looks weird at first glance, but it's also really convenient and satisfying in its consistency. The weirdness factor will likely go down as we get used to seeing this new construct, while the convenience/ease-of-use goes up - so overall a solid positive for the language.

[+] sharikous|4 years ago|reply
Maybe this is the right forum to ask... Why is the debug system in Julia so terribly slow? It seems to me that Debugger.jl (or whatever runs in VSCode) interprets the code rather than compiling and running it. The result is that for me debugging is just unusable.

The standard way to put breakpoints in an executable is to replace the instruction at which to stop with INT3 (or something analogous on other architectures), then register a callback for your debugger that fires when the CPU receives the interrupt.

Is there a way to make Julia's debugger do that?

[+] KenoFischer|4 years ago|reply
We had a debugger like that a few years ago, but the experience was unsatisfying for people, because you got the "debugging optimized C++ code" experience with unreliable breakpoints and mostly unavailable debug variables. I took the decision to scrap that and instead put out something simple that's slow but robust and reliable. The plan was always to then use the JIT on top of that to create a "debug-specialized" (using statepoints for local variables rather than DWARF) version of the running code, which should give you perfect debuggability at minimal runtime cost, but it's a fair amount of work that nobody has wanted to do yet.

In general, traditional debugging has always taken a bit of a backseat in Julia, because people's code is usually decently functional, so they just run it in a Revise loop and write their state dumps directly into the code (you could deride that as printf debugging, but I think it has a bit of a bad rap, particularly in a live-reloading system, where you basically get an execution log of the expressions you're revising on every file update).

There are still cases where a traditional debugger is useful, so I'm hoping someone will take that on at some point, but so far there've been higher priorities elsewhere.

Also do note that you can switch the debugger into compiled mode, which will be faster but will ignore breakpoints.

[+] simeonschaub|4 years ago|reply
Yes, it uses Debugger.jl, which relies on JuliaInterpreter.jl under the hood, so while you can tell the debugger to compile functions in certain modules, it will mostly interpret your code.

You might be interested in https://github.com/JuliaDebug/Infiltrator.jl, which uses an approach more similar to what you describe.

[+] arksingrad|4 years ago|reply
I'm similarly disappointed in Debugger.jl, but I find that Infiltrator.jl often helps me get where I need to go for intra-function problems.
[+] adgjlsfhk1|4 years ago|reply
IMO, the best part about this is that 1.6 is officially the new LTS. Hopefully this finally ends people trying to use 1.0.x, which at this point is really sub-par.
[+] StefanKarpinski|4 years ago|reply
It's true that 1.0 feels really old at this point. We're also trying to improve messaging that people should generally not be using the LTS unless they work in a really deeply risk-averse organization. Almost everyone should just use the latest release. That messaging should hopefully make the question of which release is the LTS less important. It just shouldn't matter to most people.
[+] thetwentyone|4 years ago|reply
Been really pleased with Julia and happy about the continued progress. Package speedups on Windows are especially nice for me in this release.
[+] lukego|4 years ago|reply
Likewise. The core language is pretty amazing and this steady stream of improvements is very impressive and reassuring. Being able to easily install, run, and combine bleeding-edge research tools is fantastic.

I'm really enjoying exploring the probabilistic-programming corner of the Juliaverse and finding it much smoother to get up and running with than Python/R tooling.

[+] dagw|4 years ago|reply
"Package speedups on Windows are especially nice for me in this release."

This is huge! Despite my best efforts, Julia has been practically unusable on Windows. There are lots of people at work who could probably replace Matlab with Julia, but this has been a complete showstopper.

[+] savant_penguin|4 years ago|reply
"Julia v1.7 is also the first release which runs on Apple Silicon, for example the M1 family of ARM CPUs. "

I hope those benchmarks are coming in hot

[+] ChrisRackauckas|4 years ago|reply
>I hope those benchmarks are coming in hot

M1 is extremely good for PDEs because of its large cache lines.

https://github.com/SciML/DiffEqOperators.jl/issues/407#issue...

The JuliaSIMD tools, which are used internally for BLAS instead of OpenBLAS and MKL (because they tend to outperform standard BLAS libraries for the operations we use: https://github.com/YingboMa/RecursiveFactorization.jl/pull/2...), also generate good code for M1, so that was giving us some powerful use cases right off the bat, even before the heroics that allowed C/Fortran compilers to fully work on M1.

[+] LolWolf|4 years ago|reply
Still Tier 3 support, but hopefully it'll be Tier 1 very soon :)

(As far as I know, the community is really working on it! And I'm phenomenally excited)

[+] pepoluan|4 years ago|reply
I love how the Xoshiro PRNG Family is replacing Mersenne Twister more and more.
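
For anyone who hasn't tried it yet, the new generator family is constructible directly from the Random stdlib in 1.7+, and seeding one gives reproducible streams (a quick sketch):

```julia
using Random

rng = Xoshiro(1234)      # the Xoshiro256++-based generator behind Julia 1.7's default RNG
x = rand(rng)            # deterministic for a given seed
y = rand(Xoshiro(1234))  # a fresh generator with the same seed replays the stream
x == y                   # true
```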
[+] eigenspace|4 years ago|reply
This release was a long time coming. Very glad it's now here!
[+] logankilpatrick|4 years ago|reply
So much hard work from the whole Julia community, it's great to see the release go live!
[+] tpoacher|4 years ago|reply
I used to love Julia, but this koan makes more and more sense to me:

> A martial arts student went to his teacher and said earnestly, “I am devoted to studying your martial system. How long will it take me to master it?” The teacher’s reply was casual, “Ten years.” Impatiently, the student answered, “But I want to master it faster than that. I will work very hard. I will practice every day, ten or more hours a day if I have to. How long will it take then?” The teacher thought for a moment, “Twenty years.”

(originally seen in the context of this article: https://brianlui.dog/2020/05/10/beware-of-tight-feedback-loo...)

[+] amkkma|4 years ago|reply
In this analogy, is Julia the student? Or are you the student and Julia the martial art?

I'd be curious to hear more specific critiques if you don't mind

[+] pella|4 years ago|reply
> "We hope to be back in a few months to report on even more progress in version 1.8!"

1.8rc1 ?

[+] adgjlsfhk1|4 years ago|reply
We're a ways away from 1.8rc1, but we probably will have a feature freeze for 1.8 soonish (next month or so). Hopefully, 1.8 takes less time to release than 1.7 did.
[+] pjmlp|4 years ago|reply
Lots of nice goodies, quite interesting to follow on Julia's development.
[+] a-dub|4 years ago|reply
time to give it another look!
[+] moelf|4 years ago|reply
try doing this year of Advent of Code in Julia!
[+] whimsicalism|4 years ago|reply
i seem to be the last person in the world to prefer c-style syntax. but so much cool stuff is happening in julia that it seems silly to avoid diving in over such a basic syntactic nit.