top | item 45137999

thetwentyone | 5 months ago

Especially because Julia has pretty user-friendly and robust GPU capabilities such as JuliaGPU[1] and Reactant[2], among other options for compiling generic Julia code to the GPU.

1: https://enzymead.github.io/Reactant.jl/dev/

2: https://enzymead.github.io/Reactant.jl/dev/

jb1991|5 months ago

I get the impression that most of the comments in this thread don't understand what a GPU kernel is. High-level languages like Python and Julia are not themselves running as kernels; they are calling into kernels usually written in C++. The goal with Mojo is different, as it says at the top of the article:

> write state of the art kernels

You don't write kernels in Julia.

arbitrandomuser|5 months ago

>You don't write kernels in Julia.

The package https://github.com/JuliaGPU/KernelAbstractions.jl was specifically designed so that Julia can be compiled down to GPU kernels.

Julia is high-level, yes, but its semantics allow it to be compiled down to machine code without a runtime interpreter. This is a core differentiating feature from Python: Julia can be used to write GPU kernels.
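As a minimal sketch of what this looks like (an element-wise add kernel, with hypothetical variable names; the backend choice and launch parameters are illustrative), a KernelAbstractions.jl kernel is ordinary Julia code annotated with `@kernel`, and the same definition can target the CPU backend or a GPU backend such as CUDA.jl:

```julia
using KernelAbstractions

# Element-wise vector addition: each work-item handles one index.
@kernel function vadd!(c, a, b)
    i = @index(Global)            # global work-item index
    @inbounds c[i] = a[i] + b[i]
end

a = rand(Float32, 1024)
b = rand(Float32, 1024)
c = similar(a)

backend = CPU()                   # swap for CUDABackend() etc. on a GPU
vadd!(backend)(c, a, b; ndrange = length(c))
KernelAbstractions.synchronize(backend)
```

On a GPU backend the same kernel body is compiled through Julia's compiler to device code rather than being interpreted.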

ssfrr|5 months ago

It doesn't make sense to lump Python and Julia together in this high-level/low-level split. Julia is like Python with Numba built in: your code gets JIT-compiled to native code, so you can (for example) write for loops to process an array without the interpreter overhead you get with Python.
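For example (a trivial sketch, with a hypothetical function name), a plain loop like this is JIT-compiled to native machine code on first call, with no per-iteration interpreter cost:

```julia
# Summing an array with an explicit loop; in Julia this compiles to
# native code comparable to a hand-written C loop.
function mysum(xs)
    s = zero(eltype(xs))
    for x in xs
        s += x
    end
    return s
end

mysum(rand(1000))   # compiled on first call, then runs at native speed
```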

People have used the same compiler infrastructure to let you compile Julia code (with some restrictions) into GPU kernels.

adgjlsfhk1|5 months ago

Julia's GPU stack doesn't compile to C++; it compiles Julia straight to GPU assembly.

pjmlp|5 months ago

See the new cuTile architecture in CUDA, designed from the ground up with Python in mind.