shrinks99 | 4 months ago
Apple has been doing on-device machine learning for portrait blurs and depth estimation for years now, though based on the UI, this feature may use cloud inference as well.
Granted, those aren't the super heavy workloads like generative fill / editing, and I understand that cloud inference isn't cheap. A subscription for cloud-based ML features is something I'd find acceptable, and today that's exactly what launched... The real question is what they plan to do with this in 2-5 years. Will more non-"AI" features make their way into the pro tier? Only time will tell!