
synapticpaint | 3 years ago

So, the video was generated by applying ControlNet to the input video frame by frame. Every inference setting is the same for every frame -- seed, prompt, CFG, steps, and sampler. The only thing that changes from frame to frame is that the pose shifts slightly. So actually, if SD were well behaved, you would expect the difference between adjacent frames to be small, because the change in the input is small. But SD is highly sensitive to small input changes, so you get this amount of flicker even from tiny perturbations.
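
To make the setup concrete, here is a toy sketch of that loop (the `generate_frame` function is a hypothetical stand-in for an SD + ControlNet inference call, not a real API): the seed and all sampling settings are fixed, so the starting noise is identical for every frame, and the only varying input is the pose conditioning.

```python
import numpy as np

def generate_frame(pose_map, seed=1234, steps=20):
    # Hypothetical stand-in for one ControlNet-guided SD inference.
    # The key point: seed, steps, etc. are fixed across the whole video,
    # so the initial noise is identical for every frame; only the pose
    # conditioning changes.
    rng = np.random.default_rng(seed)
    latent = rng.standard_normal(pose_map.shape)  # same noise every frame
    # Toy "denoising" loop that gradually mixes in the conditioning.
    for _ in range(steps):
        latent = 0.9 * latent + 0.1 * pose_map
    return latent

# Two adjacent frames whose pose differs only slightly:
poses = [np.full((4, 4), p) for p in (0.0, 0.01)]
f0, f1 = (generate_frame(p) for p in poses)
```

In this linear toy, a small pose change produces a proportionally small change in the output -- that is the "well behaved" case the comment describes; the real model amplifies those small differences into visible flicker.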

I also had to specify what the outfit should be; when I didn't, the outfit changed a lot from frame to frame and I got many more discrepancies. You can see that the outfit still changes color in the second version -- I bet you could make it even more consistent by specifying the color in the prompt too.

If you create a Dreambooth model of a character, you can probably get face consistency the same way. In this case I didn't need to, because I didn't care who I got; I just asked for an "average woman".


meghan_rain | 3 years ago

Is there like a "temperature" setting you can change? And set it to 0 to produce less flickering?

Lerc | 3 years ago

The flickering comes from the fundamental nature of the de-noising mechanism in the diffusion model. The ability to create multiple novel images from the same input comes from adding noise with a random seed. Currently this is more or less done every frame, which is why you get the flickering. Keeping the same seed wouldn't help if you want the image to move.
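
The seed mechanics can be shown in miniature with numpy (this only illustrates the noise initialization, not the diffusion process itself): a fresh seed per frame gives unrelated starting noise, while a fixed seed gives identical noise but nothing that tracks motion.

```python
import numpy as np

shape = (64, 64)

# A fresh random seed every frame gives completely different starting
# noise -- this per-frame randomness is where the flicker comes from:
frame1_noise = np.random.default_rng(1).standard_normal(shape)
frame2_noise = np.random.default_rng(2).standard_normal(shape)

# Reusing one seed gives identical starting noise for every frame,
# but then nothing in the initialization encodes the motion:
fixed1 = np.random.default_rng(42).standard_normal(shape)
fixed2 = np.random.default_rng(42).standard_normal(shape)
```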

What could be of use here is a noise transformation layer that can use the same noise for every frame but transformed to match desired motion. For video conversion you could possibly extract motion vectors from successive frames to warp the noise.
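
A minimal sketch of that idea, assuming a single whole-frame motion vector (a real system would estimate a per-pixel motion field and interpolate; `warp_noise` is a hypothetical helper, not an existing API):

```python
import numpy as np

def warp_noise(noise, motion):
    # motion = (dy, dx): one integer motion vector for the whole frame.
    # A real implementation would warp with a dense per-pixel motion
    # field; this just translates the noise to follow the motion.
    dy, dx = motion
    return np.roll(noise, shift=(dy, dx), axis=(0, 1))

rng = np.random.default_rng(0)
frame1_noise = rng.standard_normal((8, 8))
# Frame 2 reuses frame 1's noise, translated to match the estimated
# motion, instead of drawing fresh noise:
frame2_noise = warp_noise(frame1_noise, motion=(0, 1))
```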

I assume someone is working on this somewhere.

shubb | 3 years ago

I wonder if putting an adversary network on top would reduce the flickering: a mechanism that only accepts a frame if it is detected to be the next frame in a video of the same person, and otherwise regenerates.
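
The accept-or-regenerate loop being proposed might look roughly like this (everything here is a stand-in: `generate` fakes an SD + ControlNet call, and `is_consistent` substitutes a simple mean-absolute-difference check for the learned adversary):

```python
import numpy as np

def generate(seed):
    # Hypothetical stand-in for one SD + ControlNet inference call.
    return np.random.default_rng(seed).standard_normal((64, 64))

def is_consistent(prev_frame, candidate, threshold=1.2):
    # Stand-in for the proposed adversary: accept the candidate only if
    # it is close enough to the previous frame. A trained discriminator
    # ("is this the next frame of the same person?") would replace this.
    return float(np.mean(np.abs(candidate - prev_frame))) < threshold

prev = generate(0)
# Regenerate with new seeds until the adversary accepts the frame:
accepted = None
for seed in range(1, 100):
    candidate = generate(seed)
    if is_consistent(prev, candidate):
        accepted = candidate
        break
```

The cost of this approach is that rejected frames waste full inference passes, so the threshold (or discriminator confidence) has to be loose enough that regeneration terminates quickly.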

synapticpaint | 3 years ago

Not really, but I think there are other things you can do to reduce flickering that I'm looking into.

refulgentis | 3 years ago

That's really, really good. I have an overtrained Dreambooth model I was using with ControlNet, and even mine flickered in the face more than this.

synapticpaint | 3 years ago

Are you using canny mode? One of the other modes (HED, segmentation, or depth) may give you more consistency. Lmk how it goes if you try this.