
20x Faster Background Removal in the Browser Using ONNX Runtime with WebGPU

165 points | buss_jan | 1 year ago | img.ly | reply

31 comments

[+] DaiPlusPlus|1 year ago|reply
Background removal can be thought of as foreground segmentation, inverted. That is no trivial feat; my undergraduate thesis was on segmentation (using only “mechanical” approaches, no NNs, etc.), hence my appreciation!

But here’s something I don’t understand (and someone please correct me if I’m wrong!): I do understand that NNs are to software what FPGAs are to hardware, and the ability to pick any node and mess with it (delete it, clone it, add or remove connections, change link weights, swap out the activation functions, etc.) means they’re perfect for evolutionary algorithms that mutate, spawn, and cull these NNs until they solve some problem, e.g. playing Super Mario on a NES (props to Tom7) or, in this case, photo background segmentation.

…now, assuming the analogy to FPGAs still holds, with NNs being an incredibly inefficient way to encode and execute steps in a data-processing pipeline (but very efficient at evolving that pipeline), doesn’t it then mean that whatever process is encoded in the NN should be possible to represent in some more efficient form (i.e. computer program code, even if highly parallelised), and that “compiling” it down is essential for performance? And if so, why are models/systems like this being kept in NN form?

(I look forward to revisiting this post a decade from now and musing at my current misconceptions)

[+] johndough|1 year ago|reply
For many tasks that neural networks can solve, there are traditional algorithms that are more compact (lines of source code vs size of neural network parameters), but they are not always faster and often produce results of lower quality. For a fair comparison, you have to compare the quality of result together with the computation time, which is not straightforward since those are two competing goals. That being said, neural networks perform quite well for two reasons:

1. They can produce approximate solutions which are often good enough in practice and faster than exact algorithmic solutions.

2. Neural networks benefit from billions of dollars of research into how to make them run faster, so even if they technically require more TFLOPs to compute, they are still faster than traditional algorithms that are not extremely well optimized.

Lastly, development time is also important. It is much easier to train a neural network on some large dataset than to come up with an algorithm that works for all kinds of edge cases. To be fair, neural networks might fail catastrophically when they encounter data that they have not been trained on, but maybe it is possible to collect more training data for this specific case.

I have not discussed any methods to compress and simplify already trained models here (model distillation, quantization, pruning, low-rank approximation, and probably many more that I've forgotten), but they all tip the scales in favor of neural networks.
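As a tiny illustration of one of those techniques, here is naive post-training quantization of a weight matrix to int8. This is a toy, symmetric, per-tensor sketch for illustration only, not how any particular runtime implements it:

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 plus a scale factor (symmetric, per-tensor)."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.abs(w - w_hat).max())  # small reconstruction error, ~4x less memory
```

The rounding error per weight is bounded by half a quantization step (`scale / 2`), which is why quality usually degrades only slightly while memory and bandwidth drop by 4x versus float32.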

[+] TeMPOraL|1 year ago|reply
NNs are, in a way, already "compiled". If all you want to do is inference (forward pass), then you mostly do a lot of matrix multiplications. It's the training pass that requires building up extra scaffolding to track gradients and such.
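A minimal sketch of that point in NumPy (the layer sizes and ReLU activation are arbitrary choices, not from any particular model): inference is just repeated matrix multiplication plus a cheap nonlinearity, with none of the gradient-tracking machinery that training needs.

```python
import numpy as np

def forward(x, weights, biases):
    """Inference ("forward pass") of a small MLP: just matmuls plus ReLU."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)    # linear layer followed by ReLU
    return x @ weights[-1] + biases[-1]   # final layer, no activation

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]  # arbitrary layer widths for illustration
weights = [rng.standard_normal((m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

out = forward(rng.standard_normal((3, 4)), weights, biases)
print(out.shape)  # (3, 2)
```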

It occurred to me that NNs ("AI") are indeed a bit like crypto, in the sense that both attempt to substitute compute for some human quality. Proof of Work and associated ideas try to substitute compute for trust[0]. Solving problems by feeding tons of data into a DNN is substituting compute for understanding. Specifically, for our understanding of the problem being solved.

It's neat we can just throw compute at a problem to solve it well, but we then end up with a magic black box that's even less comprehensible than the problem at hand.

It also occurs to me that stochastic gradient descent is better than evolutionary programming because it's to evolution what closed-form analytical solutions are to running a simulation of interacting bodies - if you can get away with a formula that gives you what the simulation is trying to approximate, you're better off with the formula. So in this sense, perhaps it's worth trying harder to take a step back and reverse-engineer the problems solved by DNNs, to gain that more theoretical understanding, because as fun as brute-forcing a solution is, analytical solutions are better.

--

[0] - Which I consider bad for reasons discussed many times before; it's not where I want to go with this comment.

[+] johndough|1 year ago|reply
Neural networks are not trained with evolutionary algorithms because evolutionary algorithms are very slow, especially for the millions or billions of parameters that NNs have. Instead, stochastic gradient descent is used for training, which is much more efficient.
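To make the contrast concrete, here is a toy SGD loop fitting a line (the data, learning rate, and step count are made up for illustration): each step nudges the parameters along an analytic gradient, rather than mutating and culling a population of candidates.

```python
import random

random.seed(0)
# Toy data generated from y = 2x + 1 (made up for illustration)
data = [(x, 2 * x + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]
w, b, lr = 0.0, 0.0, 0.1

for step in range(500):
    x, y = random.choice(data)    # "stochastic": one random sample per step
    err = (w * x + b) - y         # prediction error on that sample
    w -= lr * 2 * err * x         # gradient of squared error w.r.t. w
    b -= lr * 2 * err             # gradient of squared error w.r.t. b

print(round(w, 2), round(b, 2))   # approaches 2.0 and 1.0
```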
[+] eevilspock|1 year ago|reply
> doesn’t it then mean that whatever process is encoded in the NN, it should both be possible to represent in some more efficient representation...?

Not if NNs are complex systems[1] whose useful behavior is emergent[2] and therefore non-reductive[3]. In fact, my belief is that if NNs and therefore also LLMs aren't these things, they can never be the basis for true AI.[4]

---

[1] https://en.wikipedia.org/wiki/Complex_system

[2] https://en.wikipedia.org/wiki/Emergence

[3] https://en.wikipedia.org/wiki/Reductionism, https://www.encyclopedia.com/humanities/encyclopedias-almana..., https://academic.oup.com/edited-volume/34519/chapter-abstrac...

[4] Though being these things doesn't guarantee that they can be the basis for true AI either. It's a minimum requirement.

[+] forgotusername6|1 year ago|reply
"Therefore, the first run of the network will take ~300 ms and consecutive runs will be ~100 ms"

I only skimmed the article, but I don't think they mention the size of the image. 100ms is not that impressive when you consider that you need to be three times as fast for acceptable video frame rate.
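For reference, the frame-time budgets behind that comment (plain arithmetic, not figures from the article):

```python
# Time budget per frame at common video frame rates
for fps in (24, 30, 60):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")
# At ~100 ms per inference you get roughly 10 fps,
# about a third of what 30 fps playback requires.
```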

[+] diggan|1 year ago|reply
> I only skimmed the article, but I don't think they mention the size of the image. 100ms is not that impressive when you consider that you need to be three times as fast for acceptable video frame rate.

You don't need to be three times as fast for acceptable video frame rates in a video editor; you need a system that lets you cache "rendered" frames, so when the user makes an edit it renders to this cache, and once that's done the user can play it back in real time.

This is essentially how all video editors handle edits on clips/video today. Some effects/edits can be applied in real time, but the more advanced ones (I'd say background removal being one of them) work with this type of caching system.
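A toy sketch of the caching scheme described above (the class and method names are made up for illustration): the expensive effect renders into a cache once, off the playback path, and real-time playback is then just a lookup.

```python
class RenderCache:
    """Caches expensive per-frame renders (e.g. background removal)."""

    def __init__(self, render_fn):
        self.render_fn = render_fn  # slow effect, e.g. ~100 ms per frame
        self.cache = {}

    def prerender(self, frame_indices):
        # Done once after an edit, off the real-time playback path
        for i in frame_indices:
            self.cache[i] = self.render_fn(i)

    def frame(self, i):
        # Real-time playback: a cache hit is just a dict lookup;
        # fall back to rendering on a miss
        if i not in self.cache:
            self.cache[i] = self.render_fn(i)
        return self.cache[i]

# Hypothetical usage: the "effect" here just tags the frame number
cache = RenderCache(lambda i: f"rendered-{i}")
cache.prerender(range(5))
print(cache.frame(3))  # rendered-3
```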

[+] pjmlp|1 year ago|reply
As long as one uses a Chrome distribution.

WebGPU is at least one year away from becoming usable for cross-browser deployment.

[+] diggan|1 year ago|reply
> WebGPU is at least one year away of becoming usable for cross browser deployment.

In Firefox it seems to be behind a feature flag, and Safari seems to have it in its "Technology Preview" (some sort of release candidate?), so it seems closer than I had thought, at least.

[+] tlarkworthy|1 year ago|reply
ONNX is cool; the other option is TensorFlow.js, which I have found quite nice as a usable matrix lib for JS with shockingly good perf. Would love to know how well they compare.
[+] dleeftink|1 year ago|reply
Also shout out to Taichi and GPU.js for alternatives in this space. I've also had success with Hamster.js, which 'parallelizes' computations using Web Workers instead of the GPU (who knows, in the future the two might be combined?).
[+] wruza|1 year ago|reply
Interesting, there’s also a node version in /packages.
[+] wruza|1 year ago|reply
Tried it, and it's absolutely half-baked. It doesn't accept its own typed config param, messes up its own internal URLs, and cannot run from a non-project dir.

Although the segmentation quality is much better than that of `rembg`, the interface to it is just underwhelming. Update: nope, it's sharper, but fails on different images at the same rate.

gist: https://gist.github.com/sou-long/5c7cfee57f5399918c9072552af... (adapted from a real project, just for reference)

[+] jvdvegt|1 year ago|reply
MS Teams does this already, right? (I assume they do, as it didn't work in Firefox until recently.)

Or do they do it server side?

[+] afro88|1 year ago|reply
I'm pretty sure they do it client side. The latency on your video preview is nonexistent.
[+] Naira_Nicol|1 year ago|reply

[deleted]

[+] adzm|1 year ago|reply
this sounds like an LLM for sure.
[+] tommek4077|1 year ago|reply
If I run it in a browser on my client, why go to a website in the first place?
[+] jazzyjackson|1 year ago|reply
to resolve a short url to a piece of software i guess