It's not really "auto-draw" as much as it's a visual search in which you suggest shapes and it looks across the collection for visually similar icons. Impressive and fun, but not yet a huge advancement over just typing "house" or "cake" to search the image library.
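A minimal sketch of how a visual search like this could work (all names here are hypothetical; AutoDraw's actual pipeline is not public): normalize the sketch's bounding box, compute a coarse grid-occupancy descriptor, and rank library icons by cosine similarity.

```python
import numpy as np

def grid_descriptor(points, grid=8):
    """Coarse occupancy descriptor: fraction of stroke points per grid cell."""
    pts = np.asarray(points, dtype=float)
    # Normalize the bounding box to [0, 1] so scale and position don't matter.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.where(maxs - mins == 0, 1, maxs - mins)
    norm = (pts - mins) / span
    cells = np.clip((norm * grid).astype(int), 0, grid - 1)
    hist = np.zeros((grid, grid))
    for x, y in cells:
        hist[y, x] += 1
    return (hist / hist.sum()).ravel()

def nearest_icons(sketch, library, k=3):
    """Rank library entries (name -> list of stroke points) by cosine similarity."""
    q = grid_descriptor(sketch)
    def score(item):
        d = grid_descriptor(item[1])
        return -np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d))
    return [name for name, _ in sorted(library.items(), key=score)[:k]]
```

A real system would use learned stroke embeddings rather than raw occupancy grids, but the shape of the problem (descriptor plus nearest-neighbor lookup) is the same.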
Their description made it sound like a really cool lower-level tool, so obviously it ends up being a letdown.
Drawing/art programs generally have a line smoothing feature - just smoothing your wobbly lines as you draw, using relatively simple algorithms. The description here made me hope for something more "medium-level", halfway between the two. It wouldn't just smooth your lines - it would adjust them according to context, based on a corpus of more precise line drawings, and perhaps predict or suggest the next strokes. It might be difficult to pull off, though: implemented naively, it would probably just work against the artist.
In some cases, yes, it's easier to search, but it does fill a use case. I'm not an illustrator, yet I often need icons. I tend to have a rough idea of what I want but can struggle to find the right keywords. Visual search here is very useful, and it allows an element of play in finding the icon. (Hopefully play, not horrible frustration.)
Indeed... Google Docs has had this feature for some time. If you want to insert a special character or symbol, it gives you the option to draw it and then shows similar characters.
It took me about an hour, after I got incredibly frustrated that it wouldn't let me draw anything. Can't draw a robot. Can't draw a sad face (only smiley face). Can't even draw a stick figure. Can't draw a speech bubble.
I felt like it was fighting with me for what it wanted to draw, while leaving very basic and fundamental shapes out. There were more things I couldn't draw, I can't even remember everything.
-> There's an undo button, it works well. But there should be a redo button. (Or the Apple-Y or Ctrl-Y keyboard shortcut for redo ought to work.)
-> See how my smiley face is too big on the right? Well, I can't make it smaller: even if I zoom way in (there is zoom functionality), I can't use the select tool to select just the smiley face (inside the jail) to reduce it in size. I'd have to recreate the parts of this image separately.
-> There is no way to set line thickness on the clip art! This should be one of the easiest things to set - but you can only scale the whole image, not the line width. That makes it hard to work with.
Overall I found the experience frustrating.
I have a challenge for you guys, though: for the most common hundred thousand or so words, run a machine learning algorithm over your own Image Search results to come up with canonical ideas of what the objects in question might look like, after sorting the results into categories based on similarity of recognized features. Then have the algorithm create an outline from the canonical idea it has derived for each category.
What I mean is that if someone Googles "hand" they might get: left hand, right hand, fist, middle finger, OK sign. There really are only so many ways to hold a hand - so many visual meanings/memes for the idea of "hand" - and other artists have already introduced a canonical version of each. (Likewise, "stick figure" has a meme around it.)
So for each one of those, the algorithm could learn from every version that it judges as similar to the others - and then draw its own for each one! (Computer algorithms are good at drawing in a learned style, even Van Gogh's, etc.)
Other simple examples include a "peace sign". If you Google image search "peace sign" you obviously get a very canonical shape. Why can't a machine learning algorithm draw its own?
This idea of deriving free, Creative Commons-licensed images (not subject to trademark, of course) with a machine learning algorithm trained on a huge corpus of image-search data (in a fair-use way), without copying any image in particular, would be huge.
You have most of the interface to do this. It is a nice next challenge for you - and a very serious one. I suggest you do it!
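The "sort into categories based on similarity of recognized features" step described above is, at its core, clustering in a feature space. A toy sketch with hand-rolled k-means over made-up 2-D feature vectors (real image features would be high-dimensional learned embeddings; this is only the clustering skeleton):

```python
import math
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain k-means clustering: returns (centroids, assignment list)."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        for i, v in enumerate(vectors):
            assign[i] = min(range(k), key=lambda c: math.dist(v, centroids[c]))
        # Move each centroid to the mean of its members.
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, assign
```

Each resulting centroid is the "canonical idea" for its category; the much harder, unsolved part of the challenge is turning that centroid back into a clean outline drawing.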
What's the process for Google to make this sort of thing? Does some seven-figure exec say "we need to make it easier to draw bikes," and then Google gets its army of 10x engineers to make it happen?
For stuff like this, it usually starts from the bottom. Engineers have ideas, convince others to help them work on their ideas, build prototypes (alone or with others), sometimes get help from product managers to develop a business plan, then pitch it to senior leadership to get some funding.
It takes a lot of skill, tact, and product acumen to get things out the door -- probably the same set of skills as you would need outside Google. (Except that within, you have access to more and better resources, but the bar is much higher.)
Obviously, this doesn't mean that every idea will stick... a lot of them won't -- some don't make money, some don't provide real value, and some are just terrible ideas. But it's a much better process than just top-down alone.
[Edited because I'm dumb and can't count figures] I've mostly seen this kind of thing happen because some engineer(s) wanted to try an idea, not because it was imposed from above.
It looks like Google persistently feels guilty for making enormous amounts of money without providing much value in return (ads). So they try to compensate by giving back. Most of the stuff they offer is honestly crap, but this one (AutoDraw) and things like Gmail are very decent.
This would be great for flowcharts and diagrams. Sketch out a rough diagram on a tablet, and then have the shapes and lines "snap" to crisp versions as soon as they are identified. Even better if I could draw it on a whiteboard, take a photo, upload it, and get a response back as soon as it's done being converted.
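The "snap to crisp versions" step could start very simply: fit candidate primitives to the stroke and keep whichever has the lowest error. A hedged toy sketch deciding between a circle and a straight line (real diagram tools use much richer recognizers):

```python
import math

def fit_circle_error(points):
    """Mean radial deviation from a best-guess circle
    (centroid as center, mean radius as radius)."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    r = sum(radii) / len(radii)
    return sum(abs(d - r) for d in radii) / len(radii)

def fit_line_error(points):
    """Mean distance from the chord between the stroke's endpoints."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    length = math.hypot(x1 - x0, y1 - y0) or 1.0
    return sum(abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
               for x, y in points) / len(points)

def snap(points):
    """Return 'circle' or 'line', whichever primitive fits the stroke better."""
    return "circle" if fit_circle_error(points) < fit_line_error(points) else "line"
```

Extending this to rectangles, arrows, and connector routing is what separates a toy from a real diagramming tool, but the fit-and-compare structure stays the same.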
There are a bunch of apps that do this on iPad and Android... plus Microsoft's note-taking app, Lenovo/IBM's old X-series apps, and I'm sure others. Heck, the Newton did it.
If you're curious, try one of them out. It gets frustrating pretty quick.
Google needs a better way to manage the lifecycle of these things. Clearly this project will be cancelled, so rather than just reinforcing its reputation for killing projects, perhaps they need "experimental" projects that might even get spun out of the company. Or something like that.
Google keeps coming up with ways to use machine learning to do autofills, suggestions, etc.
A month ago Allo [0], then that article in Verge about computational photography [1], then cameras without lenses [2] and now this. There is no question that this is all very powerful and awesome, but it also raises some questions, like who is the creator of a photo / drawing? Is every photo / drawing going to look the same in the future?
Here is an illustration of what I am concerned about:
My wife downloaded google "Allo" (Yet another chat app where you can change font size. Innovative, I know.). It also happens to suggest answers so you don't have to type as much.
Here is how it went:
She: Hi!
Me: Hi how r u
She: Where r u
She: Where r u now?
She: At home?
She: Working?
She: I missed u
Me: Working
Me: Missed u too
Me: What u doing?
She: How are u?
Me: Fine thank u
Me: What about u?
Me: What are u doing?
Me: Can i see u?
She: Working
Me: Oh
She: Yes
Me: Where r u from?
Me: Who are u?
And it kept on going for a long, long time, neither of us actually saying anything real, but both of us learning a lot about what an average socially awkward American teenager conversation looks like.
It had love, beauty, cuteness, gifs, it even made us add some daily love quote bot to our thread, but we never actually typed anything ourselves because it was so easy not to.
Of course we both knew it and thought it was funny, but I can't shake this weird feeling that something is very wrong with this, and that in the long term we are being brainwashed into a dumber, more superficial version of ourselves. (p.s. I never use "r u"; I find it lazy.)
I was surprised how poorly it ran on my very modern phone. And then how tiny everything was on my desktop.
When I looked past that and tried to draw a cat, it wasn't all that useful. I mean cool, you saw I was drawing a face and gave me 50 options. But what am I supposed to do with that?
It feels like a rehash of what the Newton would do when you tried to draw stuff. But it does it better. I think if I could skip the "pick what I meant" step, it would be cool for whiteboarding in the office.
That's because your very modern phone has a very puny CPU compared to even the average desktop CPU. I'm surprised how few people know that their "2 GHz multi-core" phone is 5-10x slower than an average 5-year-old desktop on common tasks.
(Edit: hehe, as evidenced by this post being downvoted. The HN audience doesn't know any better either?)
This reminds me of Chinese handwriting input methods, which have almost the exact same UI. You draw a character on the screen, and you get a selection of results at the top.
Since nobody has mentioned this yet: I found that the core search functionality is not very good.
I tried drawing a frowny face, a stick figure person, and a puppy face, and it didn't recognize any of them. I'm terrible at drawing, but I feel these are objects with a universally understood outline.
Fun idea but doesn't really work. Just sorta replaces your random doodle with a random piece of clip art. Any trace of your original drawing is gone. Disappointing.
Animats | 9 years ago:
This may be the answer for how to enter emoji. There are now over 2600 emoji, with more to come. Keyboard selection isn't working and menus are huge.
colonelxc | 9 years ago:
https://techcrunch.com/2015/05/28/draw-emoji/
wyldfire | 9 years ago:
[1] https://quickdraw.withgoogle.com/
art1st | 9 years ago:
http://imgur.com/a/WWff9
huxley | 9 years ago:
https://youtu.be/VWSKqgHOEy0?t=5m32s
sigmar | 9 years ago:
What phone and what browser? Ran great in chrome on my mid-range android.
dvt | 9 years ago:
Reminds me of a little toy project I made 5 years ago: https://www.youtube.com/watch?v=3WswSywx6TI
roywiggins | 9 years ago:
http://detexify.kirelabs.org/classify.html
intoverflow2 | 9 years ago:
I was expecting it to use the data from the other drawing experiment to dream up new creations, not just search a limited library of glyphs.
tbabb | 9 years ago:
http://i.imgur.com/NmpdbT2.png
smashed | 9 years ago:
Drew a face, and it proposes ovens and random squiggles as the closest matches???
http://imgur.com/a/4mjz0