I’ve been playing with diffusion a ton for the past few months, writing a new sampler that implements an iterative blending technique described in a recent paper. The latent space is rich in semantic information, so it can be a great place to apply various transformations rather than operating on the image directly. Yet it still has a significant spatial component, so things you do in one spatial area will affect that same area of the image.
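As a toy illustration of that spatial correspondence (the shapes are SD 1.5's standard 4×64×64 latent; the masking here is my own example, not the paper's technique):

```python
import numpy as np

# Each cell of a 4x64x64 SD 1.5 latent maps to roughly an 8x8 pixel patch,
# so editing one region of the latent only touches the matching region of
# the decoded image.
latent = np.random.randn(4, 64, 64).astype(np.float32)

edited = latent.copy()
edited[:, :32, :] *= 0.0      # zero out the top half of the latent

# After decoding, only (roughly) the top rows of the 512x512 image change.
changed_rows = 32 * 8
print(changed_rows)           # 256
```

The bottom half of the latent is untouched, so the bottom half of the image comes out (essentially) unchanged — that's the "significant spatial component" in practice.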
Stable Diffusion 1.5 may be quite old now, but it is an incredibly rich model that still yields shockingly good results. SDXL is newer and more high tech, but it’s not a revolutionary improvement. It can be less malleable than the older model and harder to work with to achieve a given desired result.
> It can be less malleable than the older model and harder to work with to achieve a given desired result.
That has been my experience as well. It's frustrating because SDXL can be exquisite, but SD 1.5 is more "fun" to work with and more creative. I can throw random ideas into a mish-mash of a prompt and SD 1.5 will output an array of interesting things while SDXL will just seem to fall back to something "reasonable", ignoring anything "weird" in the prompt. SDXL also seems to have a lot more position bias in the prompt. SD 1.5 had a bit of that, paying more attention to words earlier in the prompt, but SDXL takes that to a new level.
But SDXL can draw hands consistently, so ... it's a tough choice.
I came into this thread to write exactly your comment. I find SDXL inferior to 1.5 and, yes, much harder to work with.
Another issue I have is that SDXL images you see on the web always have that “from a movie/ad”-ish coating. I can’t explain it, but it feels even more uncanny than 1.5.
SDXL is too resource-hungry for what it produces: 3x+ the model size, 12GB of VRAM is barely enough, 40 steps is the minimum, and I don’t think training LoRAs will turn out to be feasible at all. I can’t lower the resolution without distortions, and even proportions are hard to deal with. It feels much less flexible than 1.5 in this regard.
Just a terminology comment here. "Latent space" means a lot of different things in different models. For a GAN, for example, it means the "top concept" space: moving around in it changes the entire concept of the image, which is notoriously difficult to control. For SD/SDXL it refers to the bottommost layer just above pixel space; the VAE decoder expands the generated latent from 64x64 to 512x512 pixels in the case of SD 1.5.
This allows the rest of the network to be smaller while still generating a usable output resolution, so it's a performance "hack".
It's a really good idea to explore it and hack into it like in the article, to "remaster" the image so to speak!
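To make the sizes concrete (my own arithmetic, assuming SD 1.5's standard 4-channel, 8x-downsampled VAE latent):

```python
# The U-Net denoises the latent, not the pixels, so the "performance hack"
# is just working on ~48x fewer values.
pixel_shape = (3, 512, 512)    # RGB image the VAE decoder produces
latent_shape = (4, 64, 64)     # latent the U-Net actually operates on

pixels = 3 * 512 * 512         # 786,432 values
latents = 4 * 64 * 64          # 16,384 values

print(pixels // latents)       # 48
```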
Anyone know if the work shown here has been implemented in Automatic1111 or ComfyUI as an extension? If not, then that might be my first project to add, since these are (relatively speaking) quite simple to implement in code.
I think there might be a view that since most colour space conversions are roughly linear (mostly accumulations of variously scaled values), they can be expressed with relatively small neural nets. The autoencoder can therefore dedicate a negligible proportion of its parameters to that job, which gives it the potential to use whatever colour space training dictates.
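That point — that a colour space conversion is cheap for a network — can be made concrete: RGB to YCbCr is just a 3x3 matrix plus an offset, i.e. a single linear layer with no activation. A sketch (using the standard BT.601 full-range coefficients, as in JPEG):

```python
import numpy as np

# RGB -> YCbCr as one linear layer: y = x @ M.T + offset
M = np.array([
    [ 0.299,     0.587,     0.114   ],   # Y
    [-0.168736, -0.331264,  0.5     ],   # Cb
    [ 0.5,      -0.418688, -0.081312],   # Cr
])
offset = np.array([0.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    """rgb: (..., 3) array in 0..255 -> YCbCr in 0..255."""
    return rgb @ M.T + offset

white = np.array([255.0, 255.0, 255.0])
print(rgb_to_ycbcr(white))   # [255. 128. 128.]
```

Nine weights and three biases — negligible next to the millions of parameters in the autoencoder.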
I'm not entirely convinced by that argument myself. I have seen a few networks where inputs in the range -1..1 do a lot better than inputs in the range 0..2, even though that translation should be an easy step for the network to learn. The benefit from preprocessing the inputs seems greater than my common sense tells me it should be.
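For reference, the kind of preprocessing being discussed is just a shift-and-scale into a zero-centred range (a common convention, sketched here in numpy):

```python
import numpy as np

# Map uint8 pixel data (0..255) into a zero-centred -1..1 range before
# feeding a network. The network could in principle learn this offset
# itself, but empirically the normalized version often trains better.
def to_unit_range(x_uint8):
    return x_uint8.astype(np.float32) / 127.5 - 1.0

img = np.array([0, 128, 255], dtype=np.uint8)
print(to_unit_range(img))   # ~[-1.0, 0.004, 1.0]
```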
The format isn't explicit to the network, but the data it's trained on is usually in RGB format, so that's probably the reasoning. I found a repo where someone tried different formats, but it's worth noting that this was for discrimination, so just because a network can discriminate doesn't mean it does the same thing generatively. Maybe I'll run some experiments. You could use a UNet for classification and then look at the bottom layer and do the same thing. It'd be hard to do with SD (or SDXL) because you'd need to retrain with the new format. Tuning could possibly work, but the network would likely be biased to understand the RGB encoding.
I don't think it's as simple as this naive approach suggests, but it's a good preliminary analysis. It's a good lesson that while being absolutely correct might be quite difficult, diving in and having a go might get you further than you think.
It's only patterns and textures for 8x8 blocks, so I guess it could make sense; you're not going to need every conceivable pattern of 8x8 pixels in normal images.
If you quantize those 4 floats per 8x8 block, is that encoding better than say the old venerable JPG 8x8 DCT + quant?
wruza|2 years ago
I’m sticking with 1.5, no SDXL plans.
godelski|2 years ago
Edit: oops, forgot the link
https://github.com/ducha-aiki/caffenet-benchmark/blob/master...
Sabinus|2 years ago
Also interesting is how the way SDXL structures its latents affects how it thinks about images.
SV_BubbleTime|2 years ago
I for sure thought a discussion about latent spaces would instantly be over my head. It was, but took a few paragraphs.