
Federated finetuning of Whisper on Raspberry Pi 5

90 points | danieljanes | 2 years ago | flower.dev

20 comments


filterfiber|2 years ago

I don't think the article mentions it: how well do the RPi 4 and 5 do for inference with Whisper, especially v3?

coder543|2 years ago

v3 only comes in one flavor: large.

I don’t think you’re going to have a good time running the large model on a Pi of any kind.

The large models are 32x slower than the tiny models, roughly.[0]

I just tested, and whisper.cpp on my Pi 4 can transcribe the 30-second a13.wav sample (“make samples” to fetch it) in 18.5 seconds.

You can do the math… 32 × 18.5 s ≈ 10 minutes to transcribe 30 seconds of audio with the large model. Not a good time for most people.

The Pi 5 could be 2x to 3x faster.

[0]: https://github.com/openai/whisper/blob/main/README.md#availa...
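The back-of-the-envelope arithmetic above can be sketched out (assuming, as the comment does, that the ~32x tiny-to-large slowdown from OpenAI's README scales linearly to the measured Pi 4 timing):

```python
# Measured: Pi 4, whisper.cpp tiny model, 30 s clip (a13.wav) -> 18.5 s.
tiny_s = 18.5
# OpenAI's README puts the large model at roughly 32x the tiny model's cost.
large_s = tiny_s * 32
print(f"~{large_s:.0f} s, i.e. ~{large_s / 60:.0f} min per 30 s of audio")
# Even if the Pi 5 is 2-3x faster, the best case is still several minutes:
print(f"Pi 5 at 3x: ~{large_s / 3 / 60:.1f} min")
```

So the estimate is ~592 s, i.e. roughly 10 minutes, per 30-second clip on the Pi 4, and still around 3-5 minutes on a Pi 5.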

danieljanes|2 years ago

One of the Flower maintainers here; we're planning to follow up with a more in-depth performance comparison soon.

a_wild_dandan|2 years ago

I’m also interested in people’s experience. I’d expect decent performance: Whisper 3 has many model sizes, down to 35 MB, iirc. Training, and especially inference, should be doable on a Pi 5.

ulnarkressty|2 years ago

How would this actually work in practice? Do I ask the user to utter specific words then train on that? How is it different from the traditional speech recognition that I need to 'train' to work better on my voice?

The Holy Grail would be to train the model while using it, without any friction. I don't think these methods support that though.

danieljanes|2 years ago

One of the Flower maintainers here. The code example is primarily meant as a demonstrator to show that it's possible to fine-tune these models in a federated way on devices as small as a Raspberry Pi 5.

The bigger takeaway is that we're close to being able to train/fine-tune models with much better performance by accessing vastly more data on the edge, in a federated way.
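The core idea being described is federated averaging: each device fine-tunes the shared model on its own local data, and only weight updates, never raw data, travel to the server, which averages them. A toy NumPy sketch of that loop (not Flower's actual code; linear regression stands in for the Whisper fine-tuning step):

```python
import numpy as np

rng = np.random.default_rng(42)

def local_finetune(weights, client_data, lr=0.1):
    # Stand-in for on-device fine-tuning: one gradient step of linear
    # regression on this client's private data (the data never leaves).
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fedavg(weight_list, sizes):
    # Server step: average the clients' updated weights, weighted by
    # how much data each client holds.
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weight_list, sizes))

# Toy setup: three "edge devices", each with its own private dataset.
true_w = np.array([2.0, -1.0])
clients = []
for n in (20, 30, 50):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

weights = np.zeros(2)  # shared global model
for round_ in range(100):
    updates = [local_finetune(weights, c) for c in clients]
    weights = fedavg(updates, [len(c[1]) for c in clients])

print(weights)  # converges toward [2, -1] without pooling the raw data
```

In the real example, the local step is a Whisper fine-tuning pass on the Pi and Flower handles the client/server transport and aggregation.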

lfmunoz4|2 years ago

The device on the edge creates the data but must also label it, right?

saqadri|2 years ago

This is cool. This might be a silly question, but what are the scenarios where it's useful to fine-tune on the edge with small devices? I get inference on the edge, and I'm curious about metrics on that for Whisper, but isn't it better to fine-tune on beefier infrastructure and then deploy for inference on the edge?

danieljanes|2 years ago

The big opportunity on the edge is access to more data. Especially with the rise of end-to-end encryption, applications will be able to use more (and more diverse) data on the edge to get better model performance. It's generally true that training on beefier infrastructure is easier, but in the long run, nothing can beat access to better data. And edge hardware has gotten a lot faster over the last few years.

triyambakam|2 years ago

It seems like one benefit of fine tuning on the edge is the data doesn't need to move around as much. My father taught me "don't move a pile of dirt twice", so maybe it is like that.

FL33TW00D|2 years ago

Imagine fine-tuning a personal LoRA on the end user's data. No privacy headaches but all the personalization.
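The appeal of LoRA here is that the big pretrained weight matrix stays frozen and identical on every device; only a tiny low-rank adapter is trained per user. A minimal NumPy sketch of the idea (toy dimensions, not an actual Whisper layer):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # feature dimension, adapter rank (r << d)

W = rng.normal(size=(d, d))         # frozen base weight, shared by all users
A = rng.normal(size=(r, d)) * 0.01  # trainable low-rank factor (per-user)
B = np.zeros((d, r))                # B starts at zero, so the adapter is a no-op

def forward(x):
    # Base output plus the low-rank personal correction W + B @ A.
    return x @ W.T + x @ (B @ A).T

x = rng.normal(size=(1, d))
# With B = 0, the personalized model matches the base model exactly:
assert np.allclose(forward(x), x @ W.T)

# Only A and B would be trained and stored per user:
print(f"base params: {W.size}, adapter params: {A.size + B.size}")
```

Per-user state is just the small A and B matrices, which is what makes on-device personalization (and, if desired, federating only the adapters) cheap.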

Havoc|2 years ago

I’m guessing this will also help with thick accents?

jafermarq|2 years ago

Yeah. With FL it should be possible to make sense of all the data distributed across devices without ever having to move it to a central location (i.e. collect it). In the case of speech data, users participating in a federated setting would likely come from different backgrounds, which could be reflected in their accents or use of language.