
furtiman | 2 years ago

Another take from us at Edge Impulse on explaining TinyML / Edge ML, from our docs: https://docs.edgeimpulse.com/docs/concepts/what-is-embedded-...

We have built a platform to build ML models and deploy them to edge devices, from Cortex-M3s to NVIDIA Jetsons to your computer (we can even run in WASM!)

You can create an account, build a keyword spotting model from your phone, and run it in WASM directly: https://edgeimpulse.com
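To make the keyword-spotting flow concrete, continuous inference usually means sliding a fixed window over the audio stream and classifying each window. Here is a rough pure-Python sketch of that loop; the `classify` function is a hypothetical stand-in (an energy threshold), not Edge Impulse's actual exported-model API:

```python
from collections import deque

WINDOW = 16000   # one second of 16 kHz audio samples
STRIDE = 4000    # classify every 250 ms

def classify(window):
    # Hypothetical stand-in for the exported model's inference call;
    # here we just flag windows with high mean absolute amplitude.
    energy = sum(abs(s) for s in window) / len(window)
    return "keyword" if energy > 0.5 else "noise"

def keyword_spotting(stream):
    """Slide a fixed-size window over an audio stream, classifying
    each window once the buffer is full and a stride boundary is hit."""
    buf = deque(maxlen=WINDOW)
    results = []
    for i, sample in enumerate(stream):
        buf.append(sample)
        if len(buf) == WINDOW and (i + 1) % STRIDE == 0:
            results.append(classify(list(buf)))
    return results
```

A real deployment would swap `classify` for the model's inference call and feed samples from a microphone rather than a list.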

Another key thing driving Edge ML adoption is the arrival of embedded accelerator ASICs / NPUs that dramatically speed up computation at extremely low power - e.g. the Brainchip Akida neuromorphic co-processors [1]
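Part of how these accelerators hit such low power is running on quantized integer tensors instead of floats. The affine int8 scheme used by TFLite (real ≈ scale × (q − zero_point)) is easy to sketch; the scale and values below are illustrative, not taken from any particular chip:

```python
def quantize(x, scale, zero_point):
    """Affine quantization: map a real value to int8, clamped to [-128, 127]."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Recover an approximate real value from its int8 representation."""
    return scale * (q - zero_point)

# Illustrative: represent values in roughly [-1, 1] with scale ~ 1/127
scale, zp = 1 / 127, 0
q = quantize(0.5, scale, zp)          # small int the NPU can compute on
approx = dequantize(q, scale, zp)     # close to, but not exactly, 0.5
```

The accelerator then does its matrix math in int8, trading a small, bounded rounding error for large savings in silicon area and energy per operation.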

Depending on the target device, Edge Impulse supports runtimes ranging from conventional TFLite to NVIDIA TensorRT, Brainchip Akida, Renesas DRP-AI, MemryX, Texas Instruments TIDL (ONNX / TFLite), TensaiFlow, EON (Edge Impulse's own runtime), etc.
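Conceptually, picking a runtime per target is just a lookup with a portable fallback. A toy sketch of that idea (the target-to-runtime pairings below are illustrative examples, not an authoritative Edge Impulse support matrix):

```python
# Illustrative mapping from target device to inference runtime.
RUNTIMES = {
    "cortex-m": "TFLite Micro / EON",
    "jetson": "TensorRT",
    "akida": "Brainchip Akida",
    "browser": "WASM",
}

def pick_runtime(target):
    """Return the runtime for a known target, falling back to plain TFLite."""
    return RUNTIMES.get(target, "TFLite")
```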

[1] https://brainchip.com/neuromorphic-chip-maker-takes-aim-at-t...

[Edit]: added runtimes / accelerators


moh_maya|2 years ago

I tried your platform for some experiments using an Arduino, and it was a breeze and an absolute treat to work with.

The platform documentation and support are excellent.

Thank you for developing it and offering it, along with documentation, to enable folks like me (who are not coders, but understand some coding) to test and explore :)

furtiman|2 years ago

This is amazing to hear! Good luck with any other project you're gonna build next!

I'd recommend trying builds for other hardware targets too - there are a lot of interesting chips that can take advantage of Edge ML and are awesome to work with

KingFelix|2 years ago

What sort of experiments did you do? I'll go through some of the docs to test it out on an Arduino as well; would be cool to see what others have done!