There are multiple apps in the App Store that do this. I spent last year implementing pose detection in an exercise app, and we used both Apple’s pose detection and a third party’s. The pose (each point of the human form) was sent to a machine learning backend at around 30 fps, analyzed, and data was returned at about the same rate over gRPC. Each exercise had a set of specific feedback for both positioning (“Stand facing the camera with your arms at your sides/Stand sideways to the camera/etc.”) and form correction (“Raise your right arm higher above your head,” etc.). Feedback was spoken aloud to the user, and there was a relatively complex set of rules governing which feedback got priority and how often feedback was spoken. I also implemented an on-screen “skeleton” of the user’s pose points that rendered on top of the camera view. Pretty fun project from a tech point of view.
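To give a flavor of the feedback-arbitration rules mentioned above, here is a minimal sketch in Swift. Everything here is hypothetical (the type names, the positioning-beats-form priority, the cooldown value): the idea is just that each analyzed frame yields candidate messages, and the arbiter picks at most one that hasn't been spoken too recently.

```swift
import Foundation

// Hypothetical feedback categories; positioning is given higher priority
// than form correction, per an assumed (illustrative) rule set.
enum FeedbackKind: Int, Comparable {
    case formCorrection = 0   // "Raise your right arm higher above your head"
    case positioning = 1      // "Stand facing the camera" — wins ties
    static func < (a: FeedbackKind, b: FeedbackKind) -> Bool { a.rawValue < b.rawValue }
}

struct Feedback {
    let kind: FeedbackKind
    let message: String
}

// Illustrative arbiter: decides which (if any) candidate message from the
// current frame gets spoken, rate-limiting repeats with a per-message cooldown.
final class FeedbackArbiter {
    private let cooldown: TimeInterval
    private var lastSpokenAt: [String: TimeInterval] = [:]

    init(cooldown: TimeInterval = 5.0) { self.cooldown = cooldown }

    func select(from candidates: [Feedback], at now: TimeInterval) -> Feedback? {
        // Drop any message still inside its cooldown window.
        let eligible = candidates.filter { fb in
            now - (lastSpokenAt[fb.message] ?? -.infinity) >= cooldown
        }
        // Of what remains, speak the highest-priority message.
        guard let best = eligible.max(by: { $0.kind < $1.kind }) else { return nil }
        lastSpokenAt[best.message] = now
        return best
    }
}
```

In a real pipeline the selected message would be handed to something like `AVSpeechSynthesizer`; the point of the sketch is only the priority-plus-cooldown shape of the rules.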