Hey folks,
With FLUX.1 Kontext [dev] dropping yesterday, we're comparing prompting it against a fine-tuned FLUX.1 [dev] and PixArt for generating consistent characters. Beyond the comparison, we'll do a deep dive into how Flux works and how to fine-tune it.
What we'll go over:
* Which model performs best at custom character generation
* Flux's architecture (which isn't spelled out in the Flux paper)
* Generating synthetic data for fine-tuning (including how many examples you'll need)
* Evaluating the model before and after fine-tuning
* Relevant papers and models that have influenced Flux
* How to set up LoRA effectively (see the sketch after this list)
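
As a taste of the LoRA setup we'll cover, here's a minimal sketch of attaching a low-rank adapter to the FLUX.1 [dev] transformer with Hugging Face diffusers and peft. The rank, alpha, and target modules here are illustrative assumptions, not our exact recipe:

```python
# Minimal sketch: attach a LoRA adapter to the FLUX.1 [dev] transformer.
# Hyperparameters (rank, alpha, target modules) are illustrative, not a
# tuned recipe.
import torch
from diffusers import FluxTransformer2DModel
from peft import LoraConfig

# Load only the transformer; the full pipeline isn't needed to attach adapters.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# Freeze the base weights so only the injected LoRA parameters train.
transformer.requires_grad_(False)

# Low-rank adapters on the attention projections; rank 16 is a common
# starting point for single-character fine-tunes.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
transformer.add_adapter(lora_config)

trainable = sum(p.numel() for p in transformer.parameters() if p.requires_grad)
print(f"trainable params: {trainable:,}")
```

Targeting just the attention projections keeps the trainable parameter count tiny relative to the full model, which matters when you only have a few dozen character images to train on.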
This is part of a new series called Fine-Tune Fridays, where we show you how to fine-tune small open-source models and compare them to other fine-tuned models or SOTA foundation models. Hope you can join us later today at 10 AM PST!
https://lu.ma/fine-tuning-friday-3