I’m also interested in people’s experiences. I’d expect decent performance: Whisper comes in many model sizes, down to ~35 MB, iirc. Training, and especially inference, should be doable on a Pi 5.
How would this actually work in practice? Do I ask the user to utter specific words then train on that? How is it different from the traditional speech recognition that I need to 'train' to work better on my voice?
The Holy Grail would be to train the model while using it, without any friction. I don't think these methods support that though.
One of the Flower maintainers here. The code example is primarily meant as a demonstrator to show that it's possible to fine-tune these models in a federated way on devices as small as a Raspberry Pi 5.
The bigger takeaway is that we're close to being able to train/fine-tune models with much better performance by accessing vastly more data on the edge, in a federated way.
This is cool. This might be a silly question, but what are the scenarios where fine-tuning on the edge with small devices is useful? I get inference on the edge, and I'm curious about metrics on that for Whisper, but isn't it better to fine-tune on beefier infrastructure and then deploy the model for inference on the edge?
The big opportunity on the edge is access to more data. Especially with the rise of end-to-end encryption, applications will be able to use more (and more diverse) data on the edge to get better model performance. It's generally true that training on beefier infrastructure is easier, but in the long run, nothing can beat access to better data. And edge hardware has gotten a lot faster over the last few years.
It seems like one benefit of fine-tuning on the edge is that the data doesn't need to move around as much. My father taught me "don't move a pile of dirt twice", so maybe it is like that.
Yeah. With FL it should be possible to make sense of all the data distributed across devices without ever having to move it to a central location (i.e. collect it). In the case of speech data, users participating in a federated setting would likely come from different backgrounds, which could be reflected in their accent or use of language.
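To make the "update models, not data" idea concrete, here is a minimal sketch of federated averaging (FedAvg), the aggregation step at the heart of frameworks like Flower. This is an illustrative toy, not Flower's actual API: each client trains locally and only sends parameter arrays to the server, which combines them weighted by local dataset size; the raw speech data never leaves the device.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg).

    client_weights: list of parameter lists, one per client.
    client_sizes: number of local training samples per client.
    Only these parameter updates travel to the server; the raw
    (e.g. speech) data stays on each device.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(num_layers)
    ]

# Toy example: two clients, each holding one parameter array.
clients = [
    [np.array([1.0, 2.0])],  # client A, trained on 100 local samples
    [np.array([3.0, 4.0])],  # client B, trained on 300 local samples
]
avg = fedavg(clients, [100, 300])
print(avg[0])  # weighted toward client B: [2.5 3.5]
```

Because client B holds 3x more data, its parameters count 3x as much in the average; that weighting is also how a federation naturally reflects the diversity (accents, vocabulary) of its participants.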
coder543|2 years ago
I don’t think you’re going to have a good time running the large model on a Pi of any kind.
The large models are 32x slower than the tiny models, roughly.[0]
I just tested, and whisper.cpp on my Pi 4 can transcribe the 30-second a13.wav sample (“make samples” to fetch it) in 18.5 seconds.
You can do the math… 18.5 s × 32 ≈ 10 minutes to transcribe 30 seconds of audio with the large model. Not a good time for most people.
The Pi 5 could be 2x to 3x faster.
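The arithmetic above can be checked in a few lines. The inputs are the numbers from this thread: the measured 18.5 s for a 30 s clip with the tiny model on a Pi 4, the roughly 32x tiny-to-large slowdown from the Whisper README, and the assumed 2x Pi 5 speedup.

```python
# Back-of-the-envelope estimate from the figures above.
tiny_pi4 = 18.5             # seconds to transcribe a 30 s clip (measured)
slowdown = 32               # large model vs. tiny, per the Whisper README
large_pi4 = tiny_pi4 * slowdown

print(large_pi4 / 60)       # minutes per 30 s clip on a Pi 4 (~9.9)
print(large_pi4 / 2 / 60)   # assuming a Pi 5 is 2x faster (~4.9)
```

Even with the optimistic 2x-to-3x Pi 5 speedup, the large model stays far from real time, which is why the smaller models are the practical choice on this hardware.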
[0]: https://github.com/openai/whisper/blob/main/README.md#availa...