top | item 44798622


ljchen | 6 months ago

Interesting idea. I am wondering: what are the use cases off the top of your head? I am asking because, in my understanding, people who care about concurrency and parallelism are often those who care about performance.


death_eternal | 6 months ago

Like I said, the use case is heavy numerical workloads with, e.g., dataframes, in a context where the data is too big for something like Python to handle. Using Nim for this is quite difficult too, due to value unboxing overhead. This tool makes it easier to optimize for things like cache locality and to avoid unnecessary allocations.

cb321 | 6 months ago

I wouldn't reply, except that you mentioned this unboxing twice and I think people might get the wrong idea. I almost never use `ref T`/`ref object` in Nim. I find that if you annotate functions at the C level, gcc can even autovectorize a lot, like all four central moments: https://github.com/c-blake/adix/blob/3bee09a24313f8d92c185c9... - the relevance being that redoing your own BLAS level 1/2 stuff is really not so bad if the backend is autovectorizing it for you, whereas a full SVD/matrix multiply/factorization can be a lot of work. Anyway, as per the link, the Nim itself is just straight, idiomatic Nim.

Parallelism/threads is a whole other can of worms, of course. It is unfortunate that the stdlib is weak, both here and for numerics (and other things), and that people are as dependency-allergic as in C culture.

Anyway, "easier to optimize" is often subjective, and I don't mean to discourage you.