MediaSquirrel | 5 months ago
We were just calling the iPhone's built-in face tracking system via the Vision Framework to animate the avatars. That's the thing that was running on GPU.
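For context, the kind of Vision Framework face-tracking call described above can be sketched roughly like this (a minimal sketch, assuming frames arrive as `CVPixelBuffer`s; the avatar-animation code itself is not shown in the thread):

```swift
import Vision

// Hedged sketch: detect face landmarks with Vision's built-in request.
// Vision chooses where to run this (CPU, GPU, or ANE) on its own.
func detectFaceLandmarks(in pixelBuffer: CVPixelBuffer,
                         completion: @escaping ([VNFaceObservation]) -> Void) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        // Each VNFaceObservation carries landmark points usable to drive an avatar rig.
        completion((request.results as? [VNFaceObservation]) ?? [])
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try? handler.perform([request])
}
```

The key point for the thread: the caller never picks the compute device here, which is why the work can end up on the GPU without the app asking for it.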
llm_nerd | 5 months ago
That's neither here nor there with respect to CoreML, which also uses the CPU, GPU, and ANE (sometimes a combination of all of them), or the weird thing about MLX.
MediaSquirrel | 5 months ago
The only reason to use CoreML these days is to tap into the Neural Engine. When building for CoreML, if one layer of your model isn't compatible with the Neural Engine, the whole thing falls back to the CPU. Ergo, CoreML is the only way to access the ANE, but it's a buggy, all-or-nothing gambit.
Have you ever actually shipped a CoreML model or tried to use the ANE?
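The ANE request the comment describes is made through Core ML's `MLModelConfiguration.computeUnits`. A minimal sketch (assuming a compiled model class, here the placeholder name `YourModel`):

```swift
import CoreML

// Hedged sketch: asking Core ML to prefer the Neural Engine.
// There is no public way to force ANE-only execution; unsupported
// layers cause fallback, which is the all-or-nothing behavior
// described in the comment above.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndNeuralEngine  // iOS 16+/macOS 13+; .all lets Core ML decide freely
// "YourModel" is a hypothetical placeholder for a generated model class:
// let model = try YourModel(configuration: config)
```

Note that `computeUnits` is a preference, not a guarantee, which is consistent with the fallback behavior being complained about.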