item 37588381

thewataccount | 2 years ago

> Because there's no way to control the seed, a direct comparison (using a before/after slider, for example) probably wouldn't make sense.

Even if it were the same seed, from my understanding DALL-E 3 would have to be just a further-trained version of the same checkpoint to even resemble DALL-E 2's image. Like how Stable Diffusion 1.4 vs. 1.5, and 2.0 vs. 2.1, will make identifiably similar images, but 1.5 vs. 2.1 vs. SDXL won't look remotely similar.
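The lineage point can be sketched numerically. This is a toy illustration (not any model's actual code, and `initial_latent`/`denoise` are hypothetical stand-ins): a fixed seed pins down the starting noise, but that only yields similar outputs when the two sets of weights are closely related, e.g. one is a light fine-tune of the other.

```python
import numpy as np

def initial_latent(seed, shape=(4, 8, 8)):
    # With a fixed seed, the starting noise is identical every run.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

def denoise(weights, latent):
    # Toy single-step "denoiser": a linear map over the channel axis.
    return np.einsum('ij,jhw->ihw', weights, latent)

# Two hypothetical "checkpoints": one a light fine-tune of the base,
# one trained from scratch with unrelated weights.
base = np.random.default_rng(0).standard_normal((4, 4))
fine_tuned = base + 0.01 * np.random.default_rng(1).standard_normal((4, 4))
from_scratch = np.random.default_rng(2).standard_normal((4, 4))

z = initial_latent(seed=42)
out_base = denoise(base, z)
out_ft = denoise(fine_tuned, z)
out_scratch = denoise(from_scratch, z)

# Same seed + related weights -> similar outputs;
# same seed + unrelated weights -> the shared seed buys nothing.
print(np.abs(out_base - out_ft).mean())
print(np.abs(out_base - out_scratch).mean())
```

The same logic is why SD 1.4 and 1.5 pair up nicely under one seed while an unrelated checkpoint does not: the seed fixes the input, not the function.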

Even more so because I'd wager they changed their encoder and/or decoder too.

* I think that if they used something like a ControlNet for guidance in the same way in both models, then the results might be comparable, but from my understanding DALL-E 2 doesn't work that way at all.

Comparisons would still be interesting though!


davidbarker | 2 years ago

I think you're right — I thought about it a little more after I replied.

I guess it'll just have to be comparisons of the general concepts. It'll be good to see the change in understanding of the prompt and the change in image detail.

If anyone at OpenAI wants to give me early access for a head-start… *smiles*

thewataccount | 2 years ago

Yeah, I'll be interested to see how much you have to change the prompts to get similar styles.