item 40224937

TORAX is a differentiable tokamak core transport simulator

105 points | yeldarb | 1 year ago | github.com

21 comments

[+] heisenzombie|1 year ago|reply
I recently started using JAX for some ion-optics work in accelerator physics. I have found it very very good. The autodiff stuff is magical for doing optimisation work, but even just as a compiled-numpy, I have found it very easy to get highly performant code. For reference, I previously tried roughly the same thing in “numba”, and wasn’t able to get anywhere near the same performance as JAX, even running on the CPU, which I understand is JAX’s weakest backend. By and large I have just written basically idiomatic Python/numpy code — sprinkled a few “vmap”s and “scan”s around, and got great results. I’m very pleased with JAX.
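A minimal sketch of the pattern described above, "idiomatic numpy code with a few vmaps and scans sprinkled around" (the toy dynamics here are hypothetical, not the commenter's ion-optics code):

```python
import jax
import jax.numpy as jnp

def step(x, dt):
    # Hypothetical toy dynamics: exponential damping toward zero.
    return x - dt * x

# vmap turns the single-state function into a batched one, written
# as if it acted on one state; axis 0 of x is the batch dimension.
batched_step = jax.vmap(step, in_axes=(0, None))

def trajectory(x0, dts):
    # lax.scan replaces an explicit Python time loop, so the whole
    # roll-out compiles as one program instead of many small calls.
    def body(x, dt):
        x = batched_step(x, dt)
        return x, x  # (carry, per-step output)
    final, xs = jax.lax.scan(body, x0, dts)
    return final, xs

# 4 particles, 10 time steps of dt = 0.1, JIT-compiled end to end.
x0 = jnp.ones(4)
final, xs = jax.jit(trajectory)(x0, jnp.full(10, 0.1))
```

Each step multiplies the state by 0.9, so `final` is roughly `0.9 ** 10` for every particle and `xs` holds the full (10, 4) trajectory.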
[+] Iwan-Zotow|1 year ago|reply
have any code on github? remembering my acc background, would be interesting to see...
[+] aqme28|1 year ago|reply
Very interesting that it's coming from Google. I did my master's in tokamak simulation, so my first question is about performance. Python is very rarely used in this space, purely for performance reasons. Even though Python can call out to BLAS or whatever, it's still usually worth it to code in Fortran or C or maybe Julia.
[+] nestorD|1 year ago|reply
I am doing quite a bit of work with JAX (the Python library used here) in a high-performance numerical computing context.

On GPU/TPU, it is not going to reach perfect 100% hardware usage, but it is going to get close enough (far above vanilla Python performance) and be significantly more productive than alternatives.

That makes it a sweet spot for research (where you will want to tweak things as you go) and extremely complex codes (where you already need to put your full focus on the correctness of the code). I highly recommend it to domain experts who need performance for their research project.

[+] 317070|1 year ago|reply
> Even though Python can call out to BLAS or whatever, it's still usually worth it to code in Fortran or C or maybe Julia.

I write in a mix of C++ and Python, and have also dabbled with tokamaks, and I think this is a (common) misunderstanding.

Fundamentally, you are optimizing the speed of project progress. If you don't need your results in real time (say, because you are building a simulator that is not in a control loop), you are often better off taking the easy language and ignoring its compute performance. The compute performance of a language is a fixed multiplier, and with numerical code you might see up to 10x.

But having readable code, which is easy to manipulate and change to test ideas, speeds up project progress by such a large factor that other languages have a hard time keeping up. The reason Python is omnipresent in machine learning is not for lack of trying by other languages. Python is just very good at letting you keep up with a fast-moving field.

[+] uoaei|1 year ago|reply
It's built on JAX, not vanilla Python.

The metric being optimized is not just performance, but also the ability to build reasonably performant workflows with arbitrary differentiable (i.e., ML) inputs and outputs.

[+] cokernel_hacker|1 year ago|reply
This Python code actually builds a computation graph under the hood, which then gets JIT-compiled for CPU/GPU/TPU.
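A small illustration of that tracing step (toy function, not from TORAX): `jax.make_jaxpr` shows the graph JAX records, and `jax.jit` hands that graph to XLA to compile for the target backend.

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) ** 2 + jnp.cos(x) ** 2

# Tracing records the operations symbolically instead of running them
# eagerly; the printed jaxpr is the graph that gets compiled.
print(jax.make_jaxpr(f)(1.0))

# jit compiles the traced graph via XLA and caches it per input shape.
fast_f = jax.jit(f)
y = fast_f(jnp.arange(3.0))  # sin^2 + cos^2 == 1 elementwise
```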
[+] yeldarb|1 year ago|reply
Found this really cool; I didn't even know DeepMind was working on fusion research https://www.wired.com/story/deepmind-ai-nuclear-fusion/
[+] mptest|1 year ago|reply
There's a video I saw a while back where they had a model that could predict plasma instability before it happened, letting operators shut off the machine before the out-of-control plasma damages anything. [0]

[0] https://youtu.be/4VD_DLPQJBU

[+] exabrial|1 year ago|reply
I'll be honest, I don't think I understood a word of that.
[+] parentheses|1 year ago|reply
Cool project. I would love to explore simulation projects like this but often don't know where to begin. It's partly because the domains are so foreign to me.