Unfortunately, this feature is (still) stuck behind an origin trial and requires serving three different WebAssembly binaries to get correct fallback behavior across different browsers.
wffurr|5 years ago
Feature detection for WebAssembly[0] is stuck in spec discussions, and SIMD general availability is blocked on either that or its own backwards-compatibility mechanism[1].
The issue is that a WebAssembly binary containing instructions unknown to the engine (e.g. SIMD instructions a given engine doesn't support) won't validate, even if the offending functions are never called at runtime. The only way around this is to compile your binary N×M×… times and detect which feature set is supported before loading a binary. It's a real pain in the tail when trying to support new WebAssembly features.
E.g. check out this snippet from canvas.apps.chrome, which uses WebAssembly threads on Chrome with a non-threaded fallback for e.g. mobile / Firefox:
var supportsThreads;
try {
  // A shared WebAssembly.Memory is only backed by a SharedArrayBuffer
  // when the engine supports threads; engines without thread support
  // either throw here or hand back a plain ArrayBuffer.
  supportsThreads = (new WebAssembly.Memory({
    initial: 1,
    maximum: 1,
    shared: true
  })).buffer instanceof SharedArrayBuffer;
} catch (a) {
  supportsThreads = false;
}
// r() is a helper elsewhere in the minified bundle; it yields the chosen
// scripts one at a time.
var ua = r(supportsThreads
    ? ["js/threads/ink.js", "defines_threads.js"]
    : ["js/nothreads/ink.js", "defines.js"])
  , va = ua.next().value
  , wa = ua.next().value;
[0]: https://github.com/WebAssembly/conditional-sections
[1]: https://github.com/WebAssembly/simd/issues/356
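The same probing trick works for SIMD: validate a tiny throwaway module before deciding which real binary to fetch. A rough sketch — the byte array is a hand-assembled module containing a couple of SIMD instructions (the same approach the wasm-feature-detect library takes), and the .wasm file names are made up:

// Detect SIMD by validating a module equivalent to:
//   (module (func (result v128) i32.const 0 i8x16.splat i8x16.popcnt))
// An engine without SIMD support fails validation; nothing is ever executed.
function supportsWasmSimd() {
  return WebAssembly.validate(new Uint8Array([
    0x00, 0x61, 0x73, 0x6d, // "\0asm" magic
    0x01, 0x00, 0x00, 0x00, // binary format version 1
    0x01, 0x05, 0x01, 0x60, 0x00, 0x01, 0x7b, // type section: () -> v128
    0x03, 0x02, 0x01, 0x00,                   // function section
    0x0a, 0x0a, 0x01, 0x08, 0x00,             // code section header
    0x41, 0x00, // i32.const 0
    0xfd, 0x0f, // i8x16.splat
    0xfd, 0x62, // i8x16.popcnt
    0x0b        // end
  ]));
}

var wasmUrl = supportsWasmSimd() ? "ink.simd.wasm" : "ink.wasm";

WebAssembly.validate never instantiates or runs the module, so the probe is cheap; but every extra feature axis (threads, SIMD, ...) still multiplies the number of binaries you have to ship, which is the N×M×… problem above.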
Note that these are lightweight models designed to run quickly on a CPU with batch size 1. It's not that uncommon to see multithreaded CPU code beat the GPU in that setting for other backends as well.
One of the advantages of using the CPU rather than the GPU for inference (especially with batch size 1) is that it avoids data transfer between host and device, a notoriously slow, asynchronous process. This could also explain the difference in total run time, if it was measured correctly.
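One way to see that transfer/readback cost directly: time batch-1 inference per backend with TensorFlow.js, where awaiting out.data() forces the device-to-host copy described above. A rough sketch, assuming an already-loaded model and a prepared input tensor:

import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm';

// Average batch-1 latency on a given backend ('cpu', 'wasm', or 'webgl').
async function avgLatencyMs(backend, model, input, iters = 100) {
  await tf.setBackend(backend);
  await tf.ready();
  tf.dispose(model.predict(input)); // warm-up: shader compilation, allocations
  const start = performance.now();
  for (let i = 0; i < iters; i++) {
    const out = model.predict(input);
    await out.data(); // forces the device-to-host readback on GPU backends
    out.dispose();
  }
  return (performance.now() - start) / iters;
}

On the 'webgl' backend the readback tends to dominate at small batch sizes, which is one reason the CPU can come out ahead there.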
Even WebGL2 doesn't expose compute shaders, so any NN computation works by abusing the graphics pipeline, with many inefficiencies involved: shader dispatch is more expensive, there's no access to local memory, and no control over dispatch blocks. Hopefully the upcoming WebGPU specification will close these efficiency gaps.
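For contrast, here is a minimal sketch of what a real compute dispatch looks like under WebGPU as it eventually shipped (modern WGSL syntax, run from a module so top-level await works): an explicit compute shader, explicit workgroup sizes, and storage buffers instead of render-to-texture tricks.

// Double every element of a buffer with a WebGPU compute shader.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const shader = device.createShaderModule({ code: `
  @group(0) @binding(0) var<storage, read_write> data: array<f32>;
  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) id: vec3u) {
    data[id.x] = data[id.x] * 2.0;
  }
`});

const pipeline = device.createComputePipeline({
  layout: 'auto',
  compute: { module: shader, entryPoint: 'main' },
});

const input = new Float32Array(256).fill(1);
const buffer = device.createBuffer({
  size: input.byteLength,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
  mappedAtCreation: true,
});
new Float32Array(buffer.getMappedRange()).set(input);
buffer.unmap();

const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer } }],
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(input.length / 64); // explicit control over dispatch
pass.end();
device.queue.submit([encoder.finish()]);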
drej|5 years ago
As for traditional TensorFlow, the easiest way we found to improve performance (easily 2x) was to find or create builds tailored to our machines. With Python we had prebuilt wheels, which have (understandably) low CPU feature requirements; if you find or build your own (e.g. if you have AVX-512), you can get pretty decent performance gains.
(Yes, there are unofficial wheels for various CPUs, but I'm not sure if that passes your security requirements.)
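For reference, this is roughly the recipe from the TensorFlow build-from-source docs of that era; -march=native lets the compiler use whatever the build machine supports (AVX2, AVX-512, ...), and the exact Bazel target varies by TF version:

# Build a wheel tuned to the host CPU, then install it.
bazel build -c opt --copt=-march=native //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl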
What's the use-case for TensorFlow on web/mobile web? I thought TensorFlow was mostly for training models, and my assumption would be that this is mostly relevant in the server/workstation context, but maybe I'm missing something.
etaioinshrdlu|5 years ago
If the CPU is really faster than the GPU, that demonstrates just how inefficient the WebGL backend is compared to something like CUDA.