item 27070838

Anime2Sketch: A sketch extractor for illustration, anime art, manga

273 points | lnyan | 4 years ago | github.com

54 comments

[+] CobrastanJorji|4 years ago|reply
I think I'm missing something. I get the Sketch-to-Photo synthesis that this is based on. It's really weird, neat stuff. But as a layman, I'm having trouble seeing the difference between the result of this anime-to-sketch synthesis and what I'd expect to get out of a simple edge detection. Is the difference that it's more clever about which details to ignore?
[+] fsloth|4 years ago|reply
I only dabble in graphics, but generally simple edge detection needs really uniform tonality and no textures in the input to work well. Look, for example, at how in the more "sketchy" examples the linework that "looks right" is extracted from quite noisy input. Also, in the top example with the houses, the contrast difference that gets extracted into linework is lower than in the character areas.

So, for the flat-shaded images with explicit black outlines - yes, there's likely not much difference from edge detection. But when the image has lots of different contrasts and tonalities, this looks much more impressive.

[+] greatgoat420|4 years ago|reply
I was actually going to ask if someone had done a comparison with edge detection.
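For anyone wanting that comparison: "simple edge detection" here usually means a gradient filter such as Sobel. A minimal NumPy sketch of the idea (illustrative only, nothing from the repo) shows why it struggles with texture - it responds to every local contrast change, not just the "right" outlines:

```python
import numpy as np

def sobel_edges(gray):
    """Naive Sobel edge magnitude -- the 'simple edge detection'
    being compared against (illustrative, not the repo's code)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx = (patch * kx).sum()  # horizontal gradient
            gy = (patch * ky).sum()  # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# A hard vertical step edge responds strongly; flat regions give zero.
# Any texture or noise in the input would respond just as strongly,
# which is why it needs uniform tonality to look clean.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
print(edges.max() > 0, edges[:, 0].max() == 0)
```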
[+] wj|4 years ago|reply
This looks like a great tool to generate some Pokémon and Beyblade coloring pages for my kids. We went through everything in Google image results many moons ago.
[+] swsieber|4 years ago|reply
I really want to see how this performs on Octonauts stills
[+] fireattack|4 years ago|reply
Just a heads up, you should use a higher quality setting (or better, just use PNG) for the output.

The default Image.save JPEG quality is low enough that the compression artifacts are more prominent than the line art itself.

L91 @ data.py: image_pil.save(image_path, format='PNG')
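For context, Pillow's default JPEG quality is 75, which leaves visible ringing around thin lines. A minimal standalone sketch of the options (hypothetical filenames, not the repo's data.py):

```python
from PIL import Image

# A flat white test image stands in for a generated sketch.
img = Image.new("RGB", (64, 64), "white")

img.save("out_default.jpg")          # JPEG at the default quality (75)
img.save("out_hq.jpg", quality=95)   # fewer artifacts, larger file
img.save("out.png", format="PNG")    # lossless: no JPEG artifacts at all
```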

[+] forgotpwd16|4 years ago|reply
Then someone can use https://github.com/taivu1998/GANime to recolor them.
[+] slazaro|4 years ago|reply
Do it iteratively, one after the other, to see whether after a while the results become unrecognizable compared to the originals. Like those experiments that translated a text back and forth between languages to create gibberish.
[+] gibolt|4 years ago|reply
This feels like a tool with lots of business cases.

Studios may be able to accelerate digitalization and colorization.

The ability to convert stills into a fillable outline, or repurpose them for labels/marketing/branded coloring books (or apps), could be worth some money to those with a large content library.

[+] Cloudef|4 years ago|reply
Looks more like shaded art to unshaded lineart rather than sketch. Sketches are usually way more messy, like a blueprint for the final product.
[+] dagmx|4 years ago|reply
This is actually pretty impressive, and I can see it being really useful if it can generate clean line art from animation roughs.

It would be really interesting to see this in OpenToonz or the like.

[+] throw_m239339|4 years ago|reply
Interesting. I wonder how it fares with 3D renderings? I'm a Blender user and unfortunately, Blender's "Toon Shading" capabilities are not very good compared to, say, Cinema 4D's.
[+] pjgalbraith|4 years ago|reply
I wonder if this can be used for comic book inking. It looks like they have an example of that.

Typically the workflow is pencil drawing -> cleaned-up ink drawing (Japanese animation uses a similar process too). If this can speed up that step, it could save a lot of time.

[+] ekianjo|4 years ago|reply
Does not work as well as advertised :) I think the author clearly cherry picked their examples.
[+] mkesper|4 years ago|reply
Can you provide some counter-examples, perhaps as issues in the repo?
[+] jakearmitage|4 years ago|reply
Does anyone know a similar model that transforms normal images into Western Comic Book style? I've seen it a lot for Anime/Manga, but never for that classic style of 90's comic books.
[+] ZephyrBlu|4 years ago|reply
I'm not super familiar with deep learning, but based on the fact that this is effectively extracting edges, and the ConvTranspose2d layers, I'm guessing it's some sort of convolutional neural net?
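Good guess as far as the layer names go: Conv2d downsampling followed by ConvTranspose2d upsampling is the classic image-to-image CNN shape. A minimal PyTorch sketch of that idea (my own toy network for illustration, not the repo's actual architecture):

```python
import torch
import torch.nn as nn

# Toy encoder-decoder: Conv2d layers halve the resolution, then
# ConvTranspose2d layers upsample back, ending in a single-channel
# "sketch" map. Real models add skip connections (U-Net style).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),   # 256 -> 128
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 128 -> 64
    nn.ReLU(),
    nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 64 -> 128
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 128 -> 256
    nn.Sigmoid(),  # squash output into [0, 1] grayscale
)

x = torch.randn(1, 3, 256, 256)  # one RGB image, 256x256
y = model(x)
print(y.shape)  # torch.Size([1, 1, 256, 256])
```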
[+] androng|4 years ago|reply
What could we use this for? The immediate thing that comes to mind is making a coloring book. I’m wondering if I could use it to make something original
[+] zakki|4 years ago|reply
If I want to use this program, do I need a good GPU in my computer, or do I just need to install the required software?
[+] lostgame|4 years ago|reply
I believe that this will not be too GPU-intensive, but that will of course depend on the input resolution of the video.
[+] knicholes|4 years ago|reply
The training is what requires a good GPU. For inference, a CPU should be fine.
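To the CPU-inference point: in PyTorch (the thread mentions ConvTranspose2d, a PyTorch layer), a checkpoint trained on a GPU can be loaded on a machine with no GPU via `map_location`. A minimal sketch with a hypothetical tiny model and weights file:

```python
import torch
import torch.nn as nn

# Stand-in for a trained model; "weights.pth" is a hypothetical path.
net = nn.Linear(4, 2)
torch.save(net.state_dict(), "weights.pth")

# map_location="cpu" remaps GPU-saved tensors onto the CPU,
# so no CUDA device is needed for inference.
state = torch.load("weights.pth", map_location="cpu")
net.load_state_dict(state)
net.eval()
with torch.no_grad():  # inference only, no gradients needed
    out = net(torch.randn(1, 4))
print(out.shape)  # torch.Size([1, 2])
```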
[+] offtop5|4 years ago|reply
Is there any way for someone to post a Google Colab notebook with this?

I think this would be pretty cool if it supported any picture or video.