
return_to_monke | 11 months ago

All of these things you mention are "thinking", meaning they require complex algorithms with a bunch of branches and edge cases.

The tasks that GPUs are good at right now - graphics, number crunching, etc. - are all very simple algorithms at the core (mostly elementary linear algebra), and the problems are, in most cases, embarrassingly parallel.

CPUs are not very good at branching either - see all the effort being put into getting branch prediction right - but they are way better at it than GPUs. The main appeal of GPGPU programming, in my opinion, is that if you can get the CPU to efficiently divide the larger problem into a lot of small, simple subtasks, you can achieve large speedups.

You mentioned compilers. For a related example, see all the work Daniel Lemire has been doing on SIMD parsing: the algorithms he (co)invented are all highly specialized to the language being parsed, and highly nontrivial. Branchless programming requires an entirely different mindset/intuition than "traditional" programming, and I wouldn't expect the average programmer to come up with such novel ideas.

A GPU is a specialized tool that is useful for a particular purpose, not a silver bullet to magically speed up your code. There is a reason we are using it for its current purposes.
