spacedome's comments

spacedome | 3 years ago | on: How Julia ODE Solve Compile Time Was Reduced from 30 Seconds to 0.1

Thanks for writing this up, Chris!

I took a break from Julia a year or two ago because of some of these issues. One of the big ones: I didn't want to write and maintain a set of non-allocating LAPACK wrappers for iterative solvers, but the memory churn was killing my performance. So, so glad FastLapackInterface and LinearSolve are a thing now, and the MKL situation is much easier with the trampoline development. Makes me want to start working on Julia solvers again.

It does feel difficult to write performant Julia if you don't put a lot of effort into staying "in the know", as a lot of this knowledge is very dispersed, but I guess that makes sense while the language is still changing quite rapidly.

spacedome | 4 years ago | on: What's bad about Julia?

I love Julia and choose to work in it almost exclusively, but I agree with the points in the article. I've run into a lot of issues just writing numerical linear algebra type algorithms.

Even core libraries, and not-quite-core libraries maintained by core devs, like Distributed.jl and IterativeSolvers.jl, can feel pretty rough. For example, IterativeSolvers has had strange type issues and has not allowed multiple right-hand sides for linear solves, for years, afaik due to some aspects of the type system and some indecision in the linalg interface. DistributedArrays is still very poorly documented and looks like it hasn't been touched in three years.

I've run into problems when I need more explicit memory management: for example, none of the BLAS/LAPACK routines has an interface for the work arrays, so you either accept the reallocation or rewrite the ccall wrapper yourself. It can also be hard to tell where the memory allocation is happening.
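For what it's worth, the quickest way I've found to see the allocation is `@allocated`; a minimal sketch (exact byte counts vary by Julia and BLAS version):

```julia
using LinearAlgebra

A = randn(100, 100)
eigen(A)   # warm up so compilation isn't counted in the measurement

# Every call allocates: the output factorization *plus* the LAPACK
# work arrays hidden inside the wrapper, with no way to reuse either.
bytes = @allocated eigen(A)
println("eigen(A) allocated $bytes bytes")
```

`@time` gives the same information interactively, but `@allocated` is handy when you want to assert on it in a test.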

My most recent problem has been with Distributed and DistributedArrays: everything is fine if you just want a basic parallel mapreduce, but anything past that has been a huge pain. It's not even clear to me whether Distributed/DistributedArrays has been more or less abandoned in favor of MPI.jl, which for me removes most of the benefit of writing in Julia, since you then have to run everything through MPI. There is an MPI-style interface for DistributedArrays, but that part is not well documented and looks like an afterthought.

My use case isn't even that complex: I just want to persistently store some matrices across the nodes, run some linear algebra routines on them every iteration, send an update across the nodes, then collect at the end. If anyone has any idea how to do this correctly in Distributed or DistributedArrays, or can point me to some examples, that would be amazing, because it has been taking me forever to piece it together.

Not going to stop using Julia but there are many basic things even just in a scientific computing workflow that still feel like they were rushed and they can really take the wind out of your sails.

spacedome | 5 years ago | on: GameStop drops by 40% in 25 minutes

It looks good, I'll definitely give it a real try if I start trading options more; the order flow data alone looks like it would make it worthwhile. Do you incorporate L2 data? Couldn't find that anywhere, and it's the only thing that seems to be missing.

spacedome | 5 years ago | on: GameStop drops by 40% in 25 minutes

You can make big money trading the volatility, if you get really lucky. The price of some puts I looked at went up 500%+ during the drop today. Would not try this personally lol

spacedome | 5 years ago | on: GameStop drops by 40% in 25 minutes

Looking right now, some of the options a few months out have implied volatility of 1000%+, so your best bet might be selling them off on big crashes instead of waiting it out? Some of the puts I looked at went up 500%+ after this dip.

spacedome | 5 years ago | on: Julia adoption keeps climbing

You absolutely can use regular Jupyter notebooks for Julia! Pluto has some advantages, like notebooks being stored as normal Julia files. The Julia startup-time issues affect both.

spacedome | 5 years ago | on: How much math you need for programming (2014)

You are making a lot of ontological and epistemological assumptions that are contentious in the philosophy of math. Not saying you are wrong to think this (metaphysical questions don't necessarily have answers), but many would not agree with you.

spacedome | 5 years ago | on: How much math you need for programming (2014)

I agree this is a common sentiment among mathematicians, but this is a very modern perspective. If you look back 100 years ago to Hilbert, there was less distinction between physicists and mathematicians, much less the pure/applied rift that now exists. Arnol'd (who is referenced above) was one of the mathematicians who tried to keep this unity alive.

spacedome | 5 years ago | on: The Abolition of Work (2002)

As an 'academic' who has done plenty of physical labor, I find this argument reductive and offensive. You can disagree with the author without painting this negative picture of them.

spacedome | 5 years ago | on: Covid forced bookstores online. Can they survive?

Print on demand has also been horrible for textbooks/monographs, Springer being one of the worst. Very few copies get printed, yet those copies are an important means of preserving this knowledge, and books that don't survive a single reading don't instill confidence in the longevity of cheap print on demand. They aren't any less expensive now either. I also buy most books used, often to avoid these terrible newer printings.

spacedome | 5 years ago | on: The Accelerating Adoption of Julia

It is entirely possible, but it is not trivial, especially for the user, who then needs "arcane knowledge" of BLAS/LAPACK work-array sizes and flags. There was some discussion about this on GitHub, but it sort of trailed off without a real resolution. I think it is considered too complicated/niche to be in Base, and it was recommended to live in an external library, but nobody (myself included) seems particularly interested in what amounts to maintaining a fork of the entire set of BLAS/LAPACK wrappers. The base devs would have more insight, but this is my view from the outside at least.

spacedome | 5 years ago | on: The Accelerating Adoption of Julia

Yes, many of these are "in-place" but will still allocate. I typically use the geev!/ggev!/geevx! routines; if you look at the source code, you will see that the work arrays are still allocated inside the call. "In-place" here (unfortunately) means only that the input is overwritten, not that there is no allocation.
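To make that concrete, a quick sketch (byte counts vary by Julia/BLAS version) showing that even the mutating wrapper allocates on every call:

```julia
using LinearAlgebra

A = randn(50, 50)
B = similar(A)

# Warm up so compilation isn't counted in the measurement.
LAPACK.geev!('N', 'V', copyto!(B, A))

# geev! overwrites its input matrix, but the LAPACK work arrays (and
# the output eigenvalue/eigenvector arrays) are allocated fresh
# inside the wrapper on every invocation.
bytes = @allocated LAPACK.geev!('N', 'V', copyto!(B, A))
println("geev! allocated $bytes bytes per call")
```

The `copyto!` is just to reuse the input buffer without counting a `copy(A)` in the measurement; the nonzero result is all internal to the call.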

spacedome | 5 years ago | on: The Accelerating Adoption of Julia

Higher-level LAPACK operations, such as solving a linear system or computing an svd/eigendecomposition, cannot be done in-place the way matrix multiplication can: they require additional scratch memory of a predetermined, fixed size, called the work array. This cannot be pre-allocated in Julia, as there is no interface for it in LinearAlgebra, so these LAPACK calls will always allocate memory.
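Here's a sketch of the two-phase workspace-query convention such an interface would have to expose, via a raw ccall to LAPACK's dsyev (symmetric eigenproblem). Assumptions: Julia ≥ 1.7, where LinearAlgebra links LAPACK through libblastrampoline, and `dsyev_ws!` is my own thin wrapper here, not anything from LinearAlgebra.

```julia
using LinearAlgebra
using LinearAlgebra: libblastrampoline
using LinearAlgebra.BLAS: @blasfunc, BlasInt

# Thin hand-rolled dsyev wrapper (hypothetical name, illustration only).
function dsyev_ws!(A::Matrix{Float64}, w::Vector{Float64},
                   work::Vector{Float64}, lwork::Integer)
    n = size(A, 1)
    info = Ref{BlasInt}(0)
    ccall((@blasfunc(dsyev_), libblastrampoline), Cvoid,
          (Ref{UInt8}, Ref{UInt8}, Ref{BlasInt}, Ptr{Float64}, Ref{BlasInt},
           Ptr{Float64}, Ptr{Float64}, Ref{BlasInt}, Ptr{BlasInt},
           Clong, Clong),
          'V', 'U', n, A, max(1, n), w, work, lwork, info, 1, 1)
    info[] == 0 || error("dsyev returned info = $(info[])")
    return w
end

n = 100
S = randn(n, n); S = (S + S') / 2          # a symmetric test matrix
w = Vector{Float64}(undef, n)

# Phase 1: workspace query. lwork = -1 tells dsyev to skip the actual
# computation and report the optimal work-array length in work[1].
query = Vector{Float64}(undef, 1)
dsyev_ws!(copy(S), w, query, -1)
lwork = Int(query[1])

# Phase 2: allocate the work array once, then reuse it on every call.
work = Vector{Float64}(undef, lwork)
dsyev_ws!(copy(S), w, work, lwork)         # eigenvalues land in w
```

In a hot loop you'd keep `work` and `w` around and repeat phase 2 with no per-iteration allocation, which is exactly the buffer a LinearAlgebra-level interface would let you hoist out.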

spacedome | 5 years ago | on: The Accelerating Adoption of Julia

As much as I like Julia, I think "trivial to write allocation-free code" is a bit of an overstatement. Depending on what you are doing it can be difficult, for example when iteratively calling any of the LinearAlgebra methods, since there is no interface for preallocating work arrays (doing the ccall into LAPACK yourself is not a fun workaround). It is also not always clear why something is allocating, even with the debugging tools.