top | item 22845178


bigmit37 | 5 years ago

I was looking around for a language to write my own versions of convolution layers or LSTMs or various other ideas I have. I thought I would have to learn C++ and CUDA, which from what I hear would take a lot of time. Would this be difficult in Julia if I took some courses and learned the basics of the language?

This would really give me some incentive to learn the language.



ChrisRackauckas | 5 years ago

You could just use LoopVectorization on the CPU side. It's been shown to match well-tuned C++ BLAS implementations, for example with the pure Julia Gaius.jl (https://github.com/MasonProtter/Gaius.jl), so you can follow that as an example for getting BLAS-speed CPU-side kernels. For the GPU side, there's CUDAnative.jl and KernelAbstractions.jl, and benchmarks from NVIDIA show that this at least rivals directly writing CUDA (https://devblogs.nvidia.com/gpu-computing-julia-programming-...), so you won't be missing anything by learning Julia and sticking to it for researching new kernel implementations.
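For reference, the LoopVectorization approach looks roughly like this, a sketch of a matmul kernel in the style of the package's own examples (assumes LoopVectorization.jl is installed; the macro was called @avx in early versions and @turbo in later ones):

```julia
using LoopVectorization

# A naive triple-loop matmul. @turbo vectorizes, unrolls, and reorders
# the loop nest, which is how kernels like Gaius.jl approach BLAS speed.
function mygemm!(C, A, B)
    @turbo for m in axes(A, 1), n in axes(B, 2)
        acc = zero(eltype(C))
        for k in axes(A, 2)
            acc += A[m, k] * B[k, n]
        end
        C[m, n] = acc
    end
    return C
end

A = rand(64, 64); B = rand(64, 64); C = zeros(64, 64)
mygemm!(C, A, B)
C ≈ A * B  # matches the built-in (BLAS-backed) matmul
```

The point is that you write the plain loop nest and the macro handles the SIMD and unrolling decisions, so a convolution or LSTM cell kernel can be prototyped the same way.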

p1esk | 5 years ago

In that benchmark, was Julia tested against cuDNN-accelerated neural network CUDA code? If not, is it possible (and beneficial) to call cuDNN functions from Julia?