top | item 36345508


schreiaj | 2 years ago

What should we run it on if we want to deploy it on mobile systems with limited access to network and extremely restricted power budgets?

For a hobby project I'm trying to find a solution - the power budget for multiple units is sub 200 W. I need to run inference on a low-resolution video stream (or multiple streams, that would be nice) to do object detection. Cost is a factor because I need multiple angles to determine where the target object is in relation to a mobile platform. I'm looking at the Coral.ai board because RPi-like boards lack the ability to do ML tasks at reasonable FPS, and Nvidia seems to have abandoned the lower-cost side of the market since Jetson Nanos seem to be less and less available. (Not that Coral.ai boards are available at all...)
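The multi-angle idea boils down to triangulation: if two cameras at known positions each report a bearing to the target, the target sits at the intersection of the two rays. A minimal 2D sketch (the function name, coordinate convention, and example positions are illustrative, not from any particular library):

```python
import math

def triangulate_2d(cam_a, theta_a, cam_b, theta_b):
    """Intersect two bearing rays in the plane.

    cam_a, cam_b: (x, y) camera positions.
    theta_a, theta_b: bearings in radians, measured from the +x axis.
    Returns the (x, y) intersection, or None if the rays are parallel.
    """
    ax, ay = cam_a
    bx, by = cam_b
    dax, day = math.cos(theta_a), math.sin(theta_a)
    dbx, dby = math.cos(theta_b), math.sin(theta_b)
    # Solve cam_a + t*dA = cam_b + s*dB via the 2D cross product.
    denom = dax * dby - day * dbx
    if abs(denom) < 1e-9:
        return None  # bearings (nearly) parallel: no unique fix
    t = ((bx - ax) * dby - (by - ay) * dbx) / denom
    return (ax + t * dax, ay + t * day)
```

For example, two cameras 1 m apart, each sighting a target at (0.5, 1.0), recover that point from the two bearings alone. In practice each detector's bounding-box center would be converted to a bearing using the camera's field of view, and noisy bearings from more than two cameras would be combined with a least-squares fit instead of a single intersection.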



Eisenstein | 2 years ago

Check out the Luxonis Oak products. I use an Oak-1 Lite to do real-time two-stage object detection and recognition (~23 FPS at 1080p, inference on-device with two custom yolov5n models). With a bit of Python and a Pi (or a Rock64 or similar) you can get it up and running in a day. They also have a decent community and are actively developing the API/SDK and hardware.

schreiaj | 2 years ago

Thanks, I've got one of their depth cameras that's been ok. I didn't realize they'd expanded their line so much. Glad to hear about the API/SDK improving, last time I mucked with it a year or so ago it seemed like it was underdeveloped.

Going to have to dig into the sensors they use - I had passable luck with non-ML tasks using dirt-cheap laptop camera modules running at low resolutions, right up until I started moving the cameras at all; then everything became a blurry mess, because the sensors were so small that their exposure times had to be long. (I'm also trying to avoid putting a bunch of illumination near the cameras, so the whole thing doesn't look like a biblically accurate angel.)
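The blur tradeoff is easy to put numbers on: the smear in pixels is roughly how many degrees the camera sweeps during the exposure, times the sensor's pixels per degree. A back-of-the-envelope sketch (the function name and the example figures are illustrative assumptions, not measured values):

```python
def motion_blur_px(angular_speed_deg_s, hfov_deg, width_px, exposure_s):
    """Approximate motion blur in pixels for a panning camera.

    Assumes the scene sweeps linearly across the sensor:
    (pixels per degree) * (degrees swept during the exposure).
    """
    px_per_deg = width_px / hfov_deg
    return angular_speed_deg_s * exposure_s * px_per_deg
```

With assumed numbers - a 90 deg/s pan, a 60-degree horizontal FOV at 640 px, and a 1/30 s exposure - the smear works out to about 32 px, which explains the blurry mess; cutting exposure to 1/240 s brings it down to roughly 4 px, at the cost of needing more light, hence the illumination problem.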