top | item 34476065


perfopt | 3 years ago

How does this work? When I run it on a machine with a GPU (PyTorch, CUDA, etc. installed), I still see it downloading files for each prompt. Is the image being generated in the cloud somewhere, or on my local machine? Why the downloads?


bryced | 3 years ago

There shouldn't be downloads per prompt. Processing happens on your machine; it does download models as needed. A network call per prompt would be a bug.

perfopt | 3 years ago

OK. I noticed that the images are not accurate when I give my own descriptions. Not sure if this is a limitation of Stable Diffusion. For example, for the text "cat and mouse samurai fight in a forest, watched by a porcupine" I got a cat and a mouse (with a cat's face and tail!) in a forest, sort of fighting. But no porcupine.

Thank you for creating this.

perfopt | 3 years ago

I keep seeing this even when the prompt is unchanged:

  Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolv... from huggingface
  Loading model /home/hrishi/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/889b629140e71758e1e0006e355c331a5744b4bf/v1-5-pruned-emaonly.ckpt onto cuda backend...

followed by a download.

searchableguy | 3 years ago

Is there a way to pre-download all models? I want to create a docker image and cache the models.

Also, is there any way to configure the generated file path beyond the output directory, or to pipe the image directly from the CLI?
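One way to pre-download models for a Docker image is to fetch them at build time so they land in the image's cache layer. This is a sketch, assuming the tool reads the standard Hugging Face cache (the path the log messages above point at) and that `huggingface_hub`'s `snapshot_download` covers the checkpoints you need; the base image and repo id are illustrative:

```dockerfile
FROM python:3.10-slim

RUN pip install huggingface_hub

# Fetch the model during `docker build`; it lands in the image's
# Hugging Face cache (~/.cache/huggingface/hub by default), so
# containers start without re-downloading.
RUN python -c "from huggingface_hub import snapshot_download; \
    snapshot_download('runwayml/stable-diffusion-v1-5')"
```

Whether the output path or stdout piping is configurable is specific to the tool's CLI, so check its own options.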