
Supercharging TensorFlow.js with SIMD and multi-threading

63 points | Marat_Dukhan | 5 years ago | blog.tensorflow.org

19 comments


wffurr|5 years ago

Unfortunately, this feature is (still) stuck behind an origin trial and requires serving three different WebAssembly binaries to get correct fallback behavior across different browsers.

Feature detection for WebAssembly[0] is stuck in spec discussions, and SIMD general availability is blocked on either that or its own mechanism for backwards compatibility[1].

The issue is that a WebAssembly binary containing instructions unknown to the engine (e.g. SIMD instructions on an engine without SIMD support) won't validate, even if the functions containing them are never called at runtime. The only way to work around this is to compile your binary NxMx... times, once per feature combination, and detect which feature set is supported before loading a binary. It's a real pain in the tail when trying to adopt new WebAssembly features.

e.g. check out this snippet from canvas.apps.chrome, which uses WebAssembly threads on Chrome with a non-threaded fallback for e.g. mobile / Firefox:

        // Probe: does a shared WebAssembly.Memory actually give us a
        // SharedArrayBuffer? If not (or if the constructor throws),
        // threads are unavailable.
        var X;
        try {
            X = (new WebAssembly.Memory({
                initial: 1,
                maximum: 1,
                shared: true
            })).buffer instanceof SharedArrayBuffer;
        } catch (a) {
            X = false;
        }
        // Load the threaded or single-threaded build accordingly.
        var ua = r(X ? ["js/threads/ink.js", "defines_threads.js"]
                     : ["js/nothreads/ink.js", "defines.js"])
          , va = ua.next().value
          , wa = ua.next().value;
[0]: https://github.com/WebAssembly/conditional-sections
[1]: https://github.com/WebAssembly/simd/issues/356

etaioinshrdlu|5 years ago

If I read this right, this is much faster than the WebGL backend on the devices tested.

If the CPU is really faster than the GPU, that demonstrates how inefficient the WebGL backend is compared to something like CUDA.

tsbinz|5 years ago

Note that these are lightweight models designed to run quickly on a CPU with batch size 1. It's not that uncommon to see multithreaded CPU code beat the GPU in that setting for other backends as well.

SimplyUnknown|5 years ago

One of the advantages of using the CPU rather than the GPU for inference (especially with batch size 1) is that it needs no host-to-device data transfer, which is a notoriously slow, asynchronous process. This could also explain the difference in total run time, if measured correctly.
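A back-of-envelope sketch of that transfer cost for a typical vision-model input; the bandwidth figure is an assumed round number, not a measurement:

```javascript
// Cost of shipping one input tensor to the GPU at batch size 1.
// All numbers here are illustrative assumptions.
const inputBytes = 224 * 224 * 3 * 4;     // one 224x224 RGB float32 image, ~602 KB
const effectiveBytesPerSec = 4e9;         // assumed effective host->GPU upload bandwidth
const uploadMs = inputBytes / effectiveBytesPerSec * 1e3;
console.log(uploadMs.toFixed(3) + " ms"); // ~0.15 ms under these assumptions
```

Raw bandwidth is the small part; the painful part is that the upload (and the readback of results) synchronizes with the GPU pipeline, which at batch size 1 can't be amortized over many inputs.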

Marat_Dukhan|5 years ago

Even WebGL2 doesn't expose compute shaders, so any NN computations work by abusing the graphics pipeline, with many inefficiencies involved: shader dispatch is more expensive, there's no access to local memory, and no control over dispatch blocks. Hopefully the upcoming WebGPU specification will close these efficiency gaps.
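A toy cost model of the dispatch-overhead point: a small network runs as a chain of kernels, and at batch size 1 a fixed per-dispatch cost can swamp the actual math even when the GPU computes each kernel faster. Every number below is an illustrative assumption, not a measurement:

```javascript
// Illustrative: fixed per-dispatch overhead vs. per-kernel compute time.
const kernels = 100;        // ops in a small model
const gpuDispatchUs = 50;   // assumed per-dispatch overhead via the WebGL path
const gpuComputeUs = 5;     // assumed per-kernel math time on the GPU
const cpuComputeUs = 40;    // assumed per-kernel math time on a SIMD CPU

const gpuTotalMs = kernels * (gpuDispatchUs + gpuComputeUs) / 1000; // 5.5 ms
const cpuTotalMs = kernels * cpuComputeUs / 1000;                   // 4.0 ms
console.log({ gpuTotalMs, cpuTotalMs }); // GPU loses despite faster per-kernel math
```

Larger batches or fused kernels amortize the dispatch cost, which is why the same GPU backend can still win easily on server-style workloads.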

drej|5 years ago

As for traditional TensorFlow, the easiest way we found to improve performance (easily 2x) was to find/create builds tailored to our machines. Using Python, we had prebuilt wheels, which (understandably) have low feature requirements. If you find or build your own (e.g. if you have AVX-512), you can easily get pretty decent performance gains.

(Yes, there are unofficial wheels for various CPUs, but I'm not sure whether those pass your security requirements.)

dzhiurgis|5 years ago

28 ms on a 2018 iPhone without threads or SIMD; 24 ms in Chrome on a 2019 MBP with threads but no SIMD; 11 ms with SIMD.

skohan|5 years ago

What's the use case for TensorFlow on the web / mobile web? I thought TensorFlow was mostly for training models, and my assumption would be that this is mostly relevant in the server/workstation context, but maybe I'm missing something.

The_rationalist|5 years ago

Couldn't TensorFlow leverage WebGL / WebGPU? Also, it's really sad that there's no WebCL adoption yet.