There are different versions of the Parakeet model. The 8-bit quantized version stores weights in 8 bits instead of 16 or 32, so it takes much less space (about 600MB) while maintaining roughly the same accuracy as the full-precision model.
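To illustrate the idea (this is a toy sketch of symmetric int8 post-training quantization, not Parakeet's actual quantization pipeline): each float32 weight is mapped to an 8-bit integer plus a shared scale factor, cutting storage by about 4x at the cost of a small rounding error.

```python
import numpy as np

# Hypothetical weight tensor standing in for a model layer.
weights = np.random.randn(1000, 1000).astype(np.float32)

# One scale per tensor: map the largest magnitude to 127.
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to approximate the original weights.
dequant = q.astype(np.float32) * scale

print(weights.nbytes / q.nbytes)           # 4.0 (float32 -> int8)
print(np.abs(weights - dequant).max())     # rounding error, at most ~scale/2
```

The same trick applied across every layer is what shrinks a multi-gigabyte checkpoint down to the ~600MB range.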
I think most apps that use Parakeet tend to use this version of the model?
Leftium|2 days ago
See if Parakeet (Nemotron) still uses 4GB+ with my implementation: https://rift-transcription.vercel.app/local-setup