
Teachable Machine: Teach a machine using your camera, live in the browser

438 points | jozydapozy | 8 years ago | blog.google

90 comments

[+] nsthorat|8 years ago|reply
deeplearn.js author here...

We do not send any webcam / audio data back to a server, all of the computation is totally client side. The storage API requests are just downloading weights of a pretrained model.

We're thinking about releasing a blog post explaining the technical details of this project. Would people be interested?

[+] amelius|8 years ago|reply
Yes please! :)

And some quick questions:

What network topology do you use, and on what model is it based (e.g. "inception")?

What kind of data have you used to pretrain the model?

[+] Splines|8 years ago|reply
There's something fantastically entertaining about this. It's stupidly simple (from the outside) but interacting with the computer in such a different way is weirdly fun.

It's like when you turn on a camera and people can see themselves on a TV. A lot of people can't help but make faces at it.

[+] sydd|8 years ago|reply
Why does it not work in Edge? Please keep the web open, do not make stuff that does not work in a modern browser. Also always give an option to try it anyway.
[+] haser_au|8 years ago|reply
A blog post on the technical details would be great, please. Thanks in advance, since I know it'll take a bit of your time to write.
[+] godelmachine|8 years ago|reply
To answer the question: yes, I am interested.
[+] celim307|8 years ago|reply
Pretty neat! Good overview without overwhelming right off the bat. Would be cool if they showed off common pitfalls like overfitting, or even segued into general statistics!
[+] melling|8 years ago|reply
How long before I can teach my computer gestures that are mapped to real computer functions? For example, scroll up/down, switch apps, save document, cut/copy/paste, etc.

One could probably map each gesture to a regular USB device that acts as a second keyboard and mouse? The hard part is identifying enough unique gestures?
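The gesture-to-action idea above can be sketched as a simple dispatch table. This is a minimal illustration, not anything from Teachable Machine itself; the gesture names and actions are hypothetical, and the confidence threshold guards against firing actions on uncertain predictions.

```python
# Hypothetical sketch: map recognized gestures to computer actions.
# Gesture names, actions, and the 0.8 threshold are all illustrative.

ACTIONS = {
    "swipe_up": "scroll_up",
    "swipe_down": "scroll_down",
    "fist": "copy",
    "open_palm": "paste",
}

def dispatch(gesture: str, confidence: float, threshold: float = 0.8):
    """Trigger an action only when the classifier is confident enough."""
    if confidence < threshold:
        return None  # ignore uncertain predictions
    return ACTIONS.get(gesture)
```

The hard part the comment identifies (finding enough distinguishable gestures) shows up here as the size of the dictionary: each new entry needs a gesture the classifier can reliably separate from the others.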

[+] amelius|8 years ago|reply
I want to teach my computer to recognize when I'm slouching, so I can correct my posture.
[+] amelius|8 years ago|reply
I don't have a camera here. Did anyone try it? How does it work?
[+] IanCal|8 years ago|reply
Surprisingly well!

It's a really well put together demo & tutorial.

I held a pen up next to me and held the green button.

Then did the same with a mouse.

It would flick between the two if I was holding nothing, so I held the orange button for a bit while holding nothing.

Worked pretty much every time.

Training is fast enough with a few hundred images per class that I didn't notice any delay.

[+] makmanalp|8 years ago|reply
It's working great because they're using a state-of-the-art model (SqueezeNet, https://github.com/DeepScale/SqueezeNet), and also because the samples/experiments you do are often only on yourself, in the same lighting, same clothes, etc. So it gives a nice idealized playground environment that mostly eliminates annoying details like these.
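The demo's approach is often described as using the pretrained CNN only as a feature extractor and classifying with k-nearest neighbors over the stored examples, which is why training feels instant. Here is a minimal sketch of that classification step, with random vectors standing in for real SqueezeNet embeddings (the exact pipeline is an assumption, not confirmed by this thread):

```python
import numpy as np

# Sketch: k-NN vote over stored feature vectors. In the real demo,
# each vector would be a CNN embedding of one webcam frame.

def knn_predict(train_feats, train_labels, query, k=3):
    """Return the majority label among the k nearest training vectors."""
    dists = np.linalg.norm(train_feats - query, axis=1)  # Euclidean distance
    nearest = np.argsort(dists)[:k]                      # indices of k closest
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)              # majority vote
```

Because "training" is just appending vectors to `train_feats`, adding a few hundred examples per class is essentially free, matching what IanCal observed above.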
[+] gfredtech|8 years ago|reply
There are 3 default classes, and you train each class (e.g. hand waving, sitting still, etc.) by taking examples of it with your camera. You then map the input data from your camera to some output data: e.g. if I used the green button to take photos of me waving, display a GIF of a cat that's waving. Instead of a GIF you can use sound, too.
[+] crypticlizard|8 years ago|reply
The value-add for this demo is amazing, it's going to be many people's first approachable experience of ml, or things just like this will be. I expect a lot more of this stuff to appear in UI/UX. It's fun, intuitive, and a game changer away from dumb screens to fully interactive machines with their own knowledge graph.
[+] lelima|8 years ago|reply
You've been able to solve problems using machine learning without coding for a while now. Azure Machine Learning has had these features for more than a year.

I've solved regression, classification, and recommendation problems with it, and the best part is that it deploys a web service in a few clicks.

[+] thanksgiving|8 years ago|reply
But to use Azure you would need to have:

1. a working phone

2. a valid credit card

That places too high a bar on students. I've tried to argue for graduated restrictions, so that students with .edu emails could do some things without entering a credit card number, but the fact that this isn't possible suggests it isn't a priority for Azure.

Google says this runs in your browser, so there's little infrastructure cost for this demo, right?

[+] shostack|8 years ago|reply
Can you clarify what you did with it? I'd love to start dabbling in solving problems with ML, but am a bit intimidated by getting started. Is it fairly easy for a novice to do the things you did?
[+] StavrosK|8 years ago|reply
Does anyone know what this uses under the hood? I loved the demo, but I would like a similarly easy way to get started locally with Python, for example.

Is there an ML library that can easily start capturing images from the webcam so you can play around with training a model?

[+] greggman|8 years ago|reply
Be aware that, at least in Chrome, once you give teachablemachine.withgoogle.com permission to use your camera, then unless you revoke that permission, the site can use your camera without further prompting, including from iframes. In other words, every ad and analytics script from Google could start injecting camera access.

I wish Chrome would give the option to grant permission only "this time", and I wish it didn't allow camera access from cross-domain iframes.

[+] ma2rten|8 years ago|reply
Are you serious? Do you realize that Chrome is also written by Google, and they could theoretically already run arbitrary code on your computer? The potential reputation damage and legal risk for Google would be way too high to pull off something like that.
[+] jamesmishra|8 years ago|reply
If this happened, the Google Chrome tab would show a camera icon. Many webcams also have adjacent LEDs that indicate when they are active.

Google could theoretically release compromised versions of Google Chrome and only use the permission on devices where webcam LEDs are unlikely (e.g. smartphones), but this is going deep into tin-foil-hat territory.

[+] azinman2|8 years ago|reply
But won’t it be on just that FQDN alone? Google analytics and ads are served from a totally different domain. What’s the actual concern here?
[+] haser_au|8 years ago|reply
Chrome does give you this option. It's called "incognito mode"
[+] addedlovely|8 years ago|reply
Good to know, but thankfully easy to remove permissions from the settings.
[+] netcraft|8 years ago|reply
What makes it non-mobile? Is it something about the expected performance of the JS, or are there APIs being used that I'm not thinking of?
[+] nsthorat|8 years ago|reply
It works on mobile, it's just slow. Every time we read and write from memory we have to pack and unpack 32-bit floats as 4 bytes, without bit-shifting operators >.>
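The packing nsthorat describes comes from storing float data in RGBA texture channels, one byte per channel. deeplearn.js had to do this round trip in GLSL without bitwise operators; Python's `struct` module shows the same idea directly (this is an illustration of the concept, not the library's actual code):

```python
import struct

# Sketch: round-trip a 32-bit float through 4 raw bytes, as when
# float data is stored one byte per RGBA texture channel.

def pack_f32(x):
    """Encode one float32 as 4 little-endian bytes."""
    return struct.pack("<f", x)

def unpack_f32(b):
    """Decode 4 bytes back into a float32."""
    return struct.unpack("<f", b)[0]
```

Doing this per read and write is cheap on desktop GPUs but adds up on mobile, which matches the "it works, it's just slow" observation.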
[+] f00_|8 years ago|reply
This is really cool: openFrameworks-esque in-browser JavaScript.

If you like this, I would highly recommend looking at openFrameworks.

The interactive browser part excites me; I want to try to make something with deeplearn.js.

[+] mschuster91|8 years ago|reply
Hmm. I wonder if one could train this with dick pics and embed it client-side into popular messenger apps... "this picture was classified as a penis", to counter morons sending their dick as a first message.
[+] peepopeep|8 years ago|reply
Am I the only paranoid one who thinks this is just Google's way of capturing millions of faces in their database? Or did Apple beat them to it?
[+] moduspol|8 years ago|reply
Claims like these make privacy-focused efforts less valuable, and I wish people wouldn't make them.

What value is there in taking care to store biometric data only locally, in a separate chip inaccessible even to the OS, if people will simply claim it's equivalent to keeping a remote database of millions of faces?

[+] xyrnoble|8 years ago|reply
Facebook beat them to it... that's the whole reason for tagged images imo. Then they can relate identities with each other and with EXIF GPS data to track their movements over time.
[+] ma2rten|8 years ago|reply
I am pretty sure that Apple does not save your image data in any database. Apple is really trying to differentiate itself on privacy.

Also, I don't think that this sends any data to Google, since it trains the neural net in the browser. You could even verify this yourself by looking at the source code.

[+] glass_of_water|8 years ago|reply
The machine learning is done in the browser with deeplearn.js, so the images aren't being sent to Google's servers.
[+] jamesmishra|8 years ago|reply
The faces are unlabeled, and I'm not sure what that data would be good for. If Google really wanted face data, they could look at:

- Gmail / Google Plus / Google Apps profile pictures

- Google Street View

- Google Hangouts

- implementing a primitive Face ID or Snapchat-style camera on Google Android

- the large mass of face pictures that they index with Google Images

[+] danso|8 years ago|reply
What did Apple beat them to? FaceID is said to not upload data off-device.
[+] icc97|8 years ago|reply
It's good to be paranoid about it but at the same time it's quite a cool thing to offer people.

Also, I think a lot of the processing is done in the browser using deeplearn.js, so I don't know how much is sent back to Google.

[+] 4684499|8 years ago|reply
They don't need to; they've got YouTube and the like. People have been providing free data sets to Google for years anyway.
[+] fancyfacebook|8 years ago|reply
Don't worry some comment on a forum said they'd never do this, so I think we're all good!
[+] eggie5|8 years ago|reply
I bet it's fine-tuning an ImageNet CNN.