
nuclai | 10 years ago

Thanks for clarifying, I'll update the README. The research paper does a better job of explaining this with its figures!

The algorithm can only reuse combinations of patterns it knows about; it can extrapolate, but the result often ends up looking like a blend. However, you can give it multiple images and it'll borrow the best features from any of them—for example, drawing from all of Monet's work. (This needs more optimization to work well though; it takes a lot of time and memory.)
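The "borrow the best features from any of them" part boils down to nearest-neighbor matching over a patch bank pooled from every style image. Here's a minimal numpy sketch of that idea—the function name, shapes, and cosine-similarity choice are my own assumptions for illustration, not the actual implementation:

```python
import numpy as np

def best_patch_matches(content_feats, style_feats_list):
    """For each content feature vector, return the most similar
    patch drawn from the pooled patches of *all* style images.

    content_feats:    (m, d) array of content patch features.
    style_feats_list: list of (n_i, d) arrays, one per style image.
    """
    # Pool patches from every style image into a single bank, so the
    # best match can come from whichever image fits best.
    bank = np.concatenate(style_feats_list, axis=0)          # (sum_n, d)

    # Cosine similarity between each content vector and the bank.
    c = content_feats / np.linalg.norm(content_feats, axis=1, keepdims=True)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    sims = c @ b.T                                           # (m, sum_n)

    # Pick the winning patch per content vector.
    idx = sims.argmax(axis=1)
    return bank[idx]                                         # (m, d)
```

The memory cost mentioned above is visible here: the similarity matrix grows with the total number of patches across all style images, which is why adding more source paintings gets expensive.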

As for the images, as long as the type of scene is roughly the same it'll work fine. The fact that it can copy things "semantically"—by understanding the content of the image—makes it work much more reliably, at the cost of extra annotations from somewhere. The original Deep Style Network is very fragile to input conditions: the composition needs to match very well for it to work (or you pick an abstract style). That was part of the motivation for researching this over the past months.
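One common way to use those semantic annotations is to append a heavily weighted label channel to each patch feature, so that nearest-neighbor matching stays within the same semantic region (sky patches match sky, skin matches skin). A small sketch of that trick—again an illustration under my own assumptions, not the author's code:

```python
import numpy as np

def augment_with_labels(feats, labels, n_classes, weight=10.0):
    """Append a weighted one-hot semantic label to each feature vector.

    feats:  (m, d) array of patch features.
    labels: (m,) integer array of semantic classes per patch.

    With a large enough weight, cross-region matches score so poorly
    that patches effectively only compete within their own region.
    """
    onehot = np.eye(n_classes)[labels] * weight   # (m, n_classes)
    return np.concatenate([feats, onehot], axis=1)
```

After augmenting both the content and style patches this way, the same nearest-neighbor search as before respects the annotation maps, which is what makes the method robust even when the compositions don't line up.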


bd | 10 years ago

So if I understood correctly, this GIF shows you, a human being, exploring the possibilities and limitations of your method, hand-tweaking it for one particular image?

http://nucl.ai/files/2016/03/MonetPainting.gif

That is, the final image, the one that looks best, is the result of you tweaking the doodles until the neural net can fill them in convincingly?

Or are these different runs of the same method on the same inputs, with some natural variability, from which you selected the one that looked best?

Or are these progression steps in one run of the automated algorithm?

The language in the blog post is kinda ambiguous; I'm not sure which steps were done by the algorithm and which by a human being.

nuclai | 10 years ago

Exactly, the doodling is done by humans and the machine paints the HD images based on Renoir's original. I've edited the blog post to clarify.