keldaris | 1 year ago
I've written very little Rust myself, but when I've tried, I've always come away with a similar impression that it's just not a good fit for performant numerical computing, with seemingly basic things (like proper SIMD support, const generics without weird restrictions, etc.) considered afterthoughts. For those more up to speed on Rust development, is this impression accurate, or have I missed something and should reconsider my view?
gauge_field|1 year ago
Regarding SIMD support, the only thing missing is stable support for avx512, plus some more exotic feature extensions for deep learning, e.g. avx_vnni. Those are implemented and waiting to be included in the next stable versions.
GPU support: this is still an issue because not enough people are working on it, but there are projects trying to improve this: see https://github.com/tracel-ai/cubecl .
Const generics: yeah, there are a few annoying issues: const generic parameters are limited to a small set of types. For instance, you can't use an enum as a const generic parameter. Also, you can't use generic parameters in const expressions on stable Rust: see the unstable feature generic_const_exprs.
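To make the limitation concrete, here is a minimal sketch (names like Buffer are made up for illustration): const generics over usize work fine on stable, but arithmetic on the parameter in a type position does not.

```rust
// A fixed-size buffer parameterized by a const generic length.
// This compiles on stable Rust: usize is one of the allowed
// const generic parameter types.
struct Buffer<const N: usize> {
    data: [f64; N],
}

impl<const N: usize> Buffer<N> {
    fn zeros() -> Self {
        Buffer { data: [0.0; N] }
    }

    fn sum(&self) -> f64 {
        self.data.iter().sum()
    }

    // This does NOT compile on stable: using an expression involving
    // the const parameter in a type position requires the unstable
    // `generic_const_exprs` feature.
    //
    // fn doubled(&self) -> Buffer<{ 2 * N }> { ... }
}

fn main() {
    let b: Buffer<4> = Buffer::zeros();
    println!("{}", b.sum()); // prints 0
}
```

An enum used as `const E: MyEnum` is rejected for the same reason: only integers, bool, and char are accepted as const parameter types on stable.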
My main reason for using rust in numerical computing:
- type system. Some find it weird. I find it explicit and easier to understand.
- cargo (nicer cross platform defaults, since I tend to develop both from windows and linux)
- unconditional code generation with #[target_feature(enable = "feature_list")]. This means I don't have to set a different set of flags for each compilation unit when building; it is enough to put that attribute on top of the function making use of SIMD.
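That last point can be sketched as follows (a minimal example, with the function name sum_avx2 and the runtime-dispatch structure being my own illustration): the attribute makes the compiler emit AVX2 code for that one function regardless of the crate-wide target flags, and the caller checks support at runtime.

```rust
// Compiled with AVX2 enabled for this function only, no matter what
// flags the rest of the crate is built with. Marked unsafe because
// calling it on a CPU without AVX2 is undefined behavior.
#[cfg(target_arch = "x86_64")]
#[target_feature(enable = "avx2")]
unsafe fn sum_avx2(xs: &[f32]) -> f32 {
    // The compiler is free to auto-vectorize this loop with AVX2.
    xs.iter().sum()
}

fn sum(xs: &[f32]) -> f32 {
    // Runtime dispatch: use the AVX2 version only if the CPU supports it.
    #[cfg(target_arch = "x86_64")]
    if is_x86_feature_detected!("avx2") {
        return unsafe { sum_avx2(xs) };
    }
    // Scalar fallback for other CPUs and architectures.
    xs.iter().sum()
}

fn main() {
    let xs = vec![1.0_f32; 8];
    println!("{}", sum(&xs)); // prints 8
}
```

The alternative without #[target_feature] is compiling the whole crate (or a separate compilation unit) with -C target-feature=+avx2, which is exactly the per-unit flag juggling mentioned above.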
I agree that if you want to be fast/exploratory in developing algorithms and you can sacrifice a little bit of performance, Julia is a better choice.
bobmcnamara|1 year ago