Not discrediting the article, but is this partially incorrect? Most of these embedded Linux devices that are "intelligent" doorbells, i.e. ones that run inference locally rather than on cloud compute, can run inference but can't train models locally. So wouldn't they have to send new facial images of the people they want to identify to a cloud service for that? Long day, not sure if words are my friend right now…
clairity|3 years ago
rapjr9|3 years ago
https://www.macrumors.com/2022/11/29/eufy-camera-cloud-uploa...
"With regard to eufy Security’s facial recognition technology, this is all processed and stored locally on the user's device."
Running a simple face recognition algorithm locally seems entirely possible. Just measure the distance between the eyes, the distance to the nose, etc., and keep a local database of all the faces encountered.

Most of the time the basestation isn't doing much except storing the video stream, so it may have enough spare capacity (say, capacity that would otherwise be used to stream video to a phone) to also train an AI in the background, even if it takes 30 minutes per face. Whether that database/training gets sent back to eufy and added to a global face recognition algorithm is a different matter.

Running an AI can require very few resources depending on the algorithm. Training takes more resources, but can be spread out over the long term. Smartphones are perfectly capable of running and training a variety of ML algorithms, and a basestation that can handle many video streams is likely more powerful than a smartphone. They might even outsource the computation to your phone.
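To make the "measure distances, keep a local database" idea concrete, here's a minimal sketch. All names and thresholds are illustrative, not any vendor's actual code; real systems use learned embeddings rather than hand-picked landmark ratios, but the matching principle (small vector per face, nearest neighbor against a local database) is the same:

```python
import math

def face_signature(landmarks):
    """landmarks: dict mapping point names to (x, y), e.g. from a face detector."""
    def dist(a, b):
        return math.dist(landmarks[a], landmarks[b])
    # Normalize by inter-eye distance so the signature is scale-invariant
    # (the same face at different distances from the camera should match).
    scale = dist("left_eye", "right_eye")
    return [
        dist("left_eye", "nose") / scale,
        dist("right_eye", "nose") / scale,
        dist("nose", "mouth") / scale,
    ]

def match(signature, database, threshold=0.1):
    """database: {label: signature}. Return the closest label, or None."""
    best_label, best_err = None, threshold
    for label, ref in database.items():
        err = math.dist(signature, ref)
        if err < best_err:
            best_label, best_err = label, err
    return best_label
```

Storage and compute here are trivial — a few floats per face and a linear scan per frame — which is why this kind of matching fits comfortably on a basestation.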
T3OU-736|3 years ago
Nest cameras claim to do "ML at the edge", and then there are devices like Nvidia's Jetson Nano. I strongly believe that facial rec without going to the cloud is possible with current tech.
Guessing here, but I can see propagating those ML models — where one camera learns a face and sends it out to the other cameras in the same "security system" logical unit — being a reasonable thing.
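The propagation idea above might look something like this sketch — purely hypothetical, not any vendor's actual design. One camera learns a face as a small vector and broadcasts it to its peers in the same logical system, so each camera can then recognize that face locally with no cloud round trip:

```python
class Camera:
    """A camera that keeps its own local database of known faces."""

    def __init__(self, name):
        self.name = name
        self.known_faces = {}  # label -> face vector (embedding/signature)

    def learn_face(self, label, vector, system):
        # Store locally, then share with the rest of the system.
        self.known_faces[label] = vector
        system.broadcast(self, label, vector)

class SecuritySystem:
    """Logical unit grouping cameras that share learned faces."""

    def __init__(self):
        self.cameras = []

    def add(self, camera):
        self.cameras.append(camera)

    def broadcast(self, sender, label, vector):
        # Push the new face to every other camera in the unit.
        for cam in self.cameras:
            if cam is not sender:
                cam.known_faces[label] = vector
```

Since only a small vector per face moves between devices (not video or images), this could plausibly stay entirely on the local network.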
Tagbert|3 years ago