contravert | 2 years ago
Midjourney generates really high-quality art from simple prompts, but the inability to meaningfully customize the output kills much of its utility for us.
We rely heavily on Stable Diffusion with specific models and ControlNet to get customizable, consistent results. Our artists still need to extensively tweak and post-process the output, then feed it back through Stable Diffusion.
That entire workflow is well beyond what a Discord-based interface can support, to say the least.
pjgalbraith|2 years ago
https://twitter.com/P_Galbraith/status/1649317290926825473?c...
This is using Stable Diffusion and the ControlNet Lineart model. The coloured version is pretty rough, but it was a quick test.
In my opinion Stable Diffusion is vastly superior to Midjourney if you have the skill to provide input to img2img/ControlNet.
I have some other earlier workflow experiments on Youtube if you're interested in this kind of thing https://www.youtube.com/pjgalbraith
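The lineart workflow described above hinges on feeding ControlNet a control image: white line-work on a black background, usually produced by a lineart annotator from a sketch or reference render. As a rough illustration of what that preprocessing step does (this is a crude gradient-based stand-in, not the actual annotator shipped with ControlNet), a sketch using NumPy:

```python
import numpy as np

def lineart_control_image(rgb: np.ndarray) -> np.ndarray:
    """Turn an RGB uint8 image (H, W, 3) into a rough lineart
    control map: white lines on black, which is the convention
    the ControlNet lineart models expect. A simple gradient
    magnitude stands in here for a real lineart annotator."""
    gray = rgb.astype(np.float32).mean(axis=2)     # luminance proxy
    gy, gx = np.gradient(gray)                     # per-axis gradients
    mag = np.hypot(gx, gy)                         # edge strength
    if mag.max() > 0:
        mag = mag / mag.max()                      # normalize to [0, 1]
    return (mag * 255).astype(np.uint8)

# Tiny demo: a black square on a white canvas produces bright
# edges at the square's border and nothing in flat regions.
canvas = np.full((64, 64, 3), 255, np.uint8)
canvas[16:48, 16:48] = 0
control = lineart_control_image(canvas)
```

The resulting single-channel map would then be stacked to three channels and passed as the `image` argument to a ControlNet pipeline (e.g. `StableDiffusionControlNetPipeline` in Hugging Face Diffusers) alongside the text prompt; the names above are illustrative, not taken from the thread.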
I'm excited by the potential for artists to accelerate their creative process with advances like ControlNet, but it makes sense that a lot about the process is frustrating today.
I'm exploring tools that might help. If you'd be interested in chatting, email me at tom@adobe.com. Thanks!
exodust|2 years ago
I'd be interested to know where the art ends up in the game. Do you mean 2D backgrounds and billboards in-game, or are we talking cut-scenes and menu-screen art?