
gtr32x | 5 years ago

I understand that the answer is no here, because this method is only suited to the class of linear systems that corresponds to sparse matrices. GPUs, on the other hand, are optimized for general-purpose matrix multiplication. Unless it can be shown that there are economically high-usage scenarios for this class of problems (e.g., on the usage magnitude of bitcoin mining), the investment in this specific research does not seem warranted.


sdenton4 | 5 years ago

Completely ridiculous.

Firstly, faster solutions in fundamental problems can eventually lead to hardware that supports it.

Secondly, this is already happening for sparse matrix multiplication: the NVIDIA A100 has hardware sparsity support, which allows making better use of pruned neural networks, for example.
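To illustrate the point about pruned networks (this is my own sketch, not from the comment; the threshold and shapes are arbitrary): magnitude pruning zeroes most of a weight matrix, and the product with a sparse representation agrees with the dense one while doing far fewer multiplies.

```python
# Hedged sketch: crude magnitude pruning turns a dense weight matrix into a
# sparse one, and the sparse matvec gives the same result. All values invented.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
W[np.abs(W) < 1.5] = 0.0           # prune small weights (~87% become zero)
W_sparse = sp.csr_matrix(W)        # store only the surviving entries

x = rng.standard_normal(256)
y_dense = W @ x
y_sparse = W_sparse @ x            # same answer, work proportional to nnz
```

Hardware like the A100's 2:4 structured sparsity imposes a stricter pattern than this unstructured example, but the payoff is the same idea: skip the zeros.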

Thirdly, sufficiently sparse systems, even without the A100, can run faster on a CPU than a GPU. If you find yourself with one of these problems, you can just choose the right piece of hardware for the job. Without a sparse algorithm, you are stuck with the slower dense solution.

Fourthly, giant sparse systems do indeed arise constantly. Just to make one up, consider weather measurements. Each row is a set of measurements from a specific weather station, but there are thousands of stations: it's a sparse set of observations, with some nearby dependencies. Evolving the state in time will often involve solving a giant linear system. (See other comments in the thread about PDEs.)
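The "evolve the state in time" step can be sketched concretely (again my own toy example, with an invented grid size and time step): an implicit time step of a diffusion-style update on a 2D grid produces exactly this kind of giant sparse system, where each unknown couples only to its few neighbors.

```python
# Hedged sketch: one implicit time step (I + dt*L) u_new = u_old on a grid,
# where L is the sparse 2D Laplacian. Sizes and coefficients are invented.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

nx = 50                       # 50x50 grid -> 2500 unknowns
N = nx * nx
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(nx, nx))
lap = sp.kron(sp.eye(nx), T) + sp.kron(T, sp.eye(nx))  # 2D Laplacian, <=5 nnz/row
dt = 0.1
A = (sp.eye(N) + dt * lap).tocsr()

u_old = np.random.rand(N)     # stand-in for the current observed state
u_new = spla.spsolve(A, u_old)  # the "giant sparse system" solve
```

A dense version of A would hold 2500 x 2500 entries; the sparse one stores only the handful of nonzeros per row, which is what makes large grids tractable at all.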

It is absolutely worthwhile research, regardless of how applicable it is to fscking bitcoin.