(no title)
furtiman | 2 years ago
We have built a platform to build ML models and deploy them to edge devices, from Cortex-M3s to Nvidia Jetsons to your computer (we can even run in WASM!)
You can create an account, build a keyword spotting model from your phone, and run it in WASM directly: https://edgeimpulse.com
Now another key thing that drives Edge ML adoption is the arrival of embedded accelerator ASICs / NPUs that dramatically speed up computation at extremely low power - e.g. the Brainchip Akida neuromorphic co-processors [1]
Depending on the target device, Edge Impulse supports runtimes ranging from conventional TFLite to NVIDIA TensorRT, Brainchip Akida, Renesas DRP-AI, MemryX, Texas Instruments TIDL (ONNX / TFLite), TensaiFlow, EON (Edge Impulse's own runtime), etc.
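To make the "conventional TFLite" path above concrete, here is a minimal sketch of that deployment flow: convert a tiny Keras model to a TFLite flatbuffer, then run it with the TFLite interpreter (the same runtime family that MCU targets use via TFLite Micro). The model architecture, feature size, and class count are illustrative assumptions, not Edge Impulse's actual pipeline.

```python
import numpy as np
import tensorflow as tf

# Toy "keyword spotting"-style classifier: a 16-element feature
# vector in, 3 class probabilities out. Shapes are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert to a TFLite flatbuffer (bytes) without touching disk.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Run inference with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

features = np.random.rand(1, 16).astype(np.float32)
interpreter.set_tensor(inp["index"], features)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])
```

On an actual edge target you would ship the flatbuffer and run it with a slimmer runtime (e.g. `tflite_runtime` or TFLite Micro) instead of full TensorFlow; the interpreter API is the same.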
[1] https://brainchip.com/neuromorphic-chip-maker-takes-aim-at-t...
[Edit]: added runtimes / accelerators
moh_maya | 2 years ago
The platform documentation and support is excellent.
Thank you for developing it and offering it, along with documentation, to enable folks like me (who are not coders, but understand some coding) to test and explore :)
furtiman | 2 years ago
I can recommend checking out building for different hardware targets - there are a lot of interesting chips that can take advantage of Edge ML and are awesome to work with
KingFelix | 2 years ago