thewataccount | 2 years ago
Even if it were the same seed, from my understanding Dalle3 would have to be a further-trained version of the same checkpoint to even resemble Dalle2's image. It's like Stable Diffusion: 1.4 vs 1.5, or 2.0 vs 2.1, will make identifiably similar images, but 1.5 vs 2.1 vs SDXL won't look remotely similar.
Even more so because I'd wager they changed their encoder and/or decoder too.
* I think that if they applied something like a ControlNet for guidance in the same way in both models, then they might be comparable, but from my understanding Dalle2 doesn't work that way at all.
Comparisons would still be interesting though!
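The seed point above can be sketched with a toy stand-in (everything here is hypothetical illustration, not actual Dalle or Stable Diffusion code): the seed only pins down the initial noise, so a fine-tune of the same checkpoint maps that noise to a nearby output, while an unrelated model maps it somewhere else entirely.

```python
import math
import random

def initial_latent(seed, n=1024):
    # The seed only fixes the starting noise vector.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

def denoise(latent, weight):
    # Toy stand-in for a denoiser: a fixed nonlinear map per "checkpoint".
    return [math.tanh(weight * x) for x in latent]

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

noise_a = initial_latent(42)
noise_b = initial_latent(42)
assert noise_a == noise_b  # same seed -> identical starting noise

base      = denoise(noise_a, weight=0.50)   # hypothetical base checkpoint
finetune  = denoise(noise_a, weight=0.55)   # slightly further-trained variant
unrelated = denoise(noise_a, weight=-3.0)   # entirely different weights

# The fine-tune stays near its base; the unrelated model diverges.
print(mean_abs_diff(base, finetune) < mean_abs_diff(base, unrelated))  # True
```

So fixing the seed across Dalle2 and Dalle3 would only guarantee the same starting noise, not similar images, unless the two models share most of their weights.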
davidbarker | 2 years ago
I guess it'll just have to be comparisons of the general concepts. It'll be good to see the change in understanding of the prompt and the change in image detail.
If anyone at OpenAI wants to give me early access for a head-start… *smiles*
thewataccount | 2 years ago