top | item 45959226


Two_hands | 3 months ago

Wow, this is great! You can't DM on here, but my email is in my blog post, in the footnotes.

Do you remember the cost of Mechanical Turk? It was something I wanted to use for EyesOff, but I never got around the cost aspect.

I need some time to process everything you said, but the EyesOff model has pretty low accuracy at the moment. I'm sure some of these tidbits of info could help improve the model, although my data is pretty messy in comparison. I had thought of doing more gaze tracking work for my model, but at long ranges it just breaks down completely (in my experience, happy to stand corrected if you've worked on that too).

Regarding the baby screener, I see how this approach could be very useful. If I get the time, I'll look into it a bit more and see what I can come up with. I'll let you know once I get round to it.



_menelaus | 3 months ago

The cost for Mech Turk:

We paid something like $10 per hour, and people loved our tasks. We paid a bit more to make sure our tasks were completed well. The main thing was just making the data collection app as efficient as possible. If you pay twice as much but collect 4x the data per task, you've doubled your efficiency.
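The efficiency arithmetic above can be made concrete. The rates and throughput numbers here are hypothetical, just to illustrate the point that doubling pay while quadrupling data collected per hour halves the cost per example:

```python
def cost_per_example(hourly_rate: float, examples_per_hour: float) -> float:
    """Dollars spent per collected data point."""
    return hourly_rate / examples_per_hour

# Hypothetical baseline: $10/hr, 100 examples/hr -> $0.10 per example.
baseline = cost_per_example(10, 100)
# Pay 2x but collect 4x via a more efficient collection app:
# $20/hr, 400 examples/hr -> $0.05 per example.
improved = cost_per_example(20, 400)

assert improved == baseline / 2  # efficiency doubled
```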

Yeah, I think it's impossible to get good gaze accuracy without observing the device reflection in the eyes. And you will never, ever be able to deal well with edge cases like lighting, hair, glasses, asymmetrical faces, etc. There's just a fundamental information limit you can't overcome. Maybe you could get within 6 inches of accuracy? But mostly it would be learning face pose, I assume. Trying to do gaze tracking with a webcam on someone 4 feet away and half offscreen just seems Sisyphean.

Is EyesOff really an important application? I'm not sure many people would want to drain their battery running it. Just a rhetorical question, I don't know.

With the baby autism screener, the difficult part is the regulatory aspect. I might have some contacts at the Mayo Clinic who would be interested in productizing something like this, though, and could ask them about it.

If I were you, I would look at how to take a mobile photo of an iris and artificially add the reflection of a phone screen to create a synthetic dataset (it won't look like a neat rectangle, more like a blurry fragment of one). Then train a CNN to predict the corners of the added reflection. And after that is solved, try the gaze tracking problem as an algebraic exercise. Like, think of the irises as two spherical mirrors. Assume their physical size. If you can locate the reflection of an object of known size in them, you should be able to work out the spatial relationships to figure out where the object being reflected is relative to the mirrors. This is hard, but it's 10-100x easier than trying end-to-end gaze tracking with a single model. Also, nobody in the world knows how to do this, AFAIK.
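The synthetic-data step could be sketched roughly like this. Everything here is an illustrative assumption (rectangle shape, intensity range, blur kernel); a real pipeline would warp the glint to follow the corneal curvature rather than pasting an axis-aligned box:

```python
import numpy as np

def add_screen_glint(iris, rng=None):
    """Composite a blurred bright rectangle (a crude stand-in for a
    phone-screen reflection) onto a grayscale iris patch.
    Returns the augmented image and the 4 corner coordinates,
    which become the regression targets for the corner-predicting CNN."""
    rng = np.random.default_rng(rng)
    img = iris.astype(np.float32).copy()
    h, w = img.shape
    # Random rectangle placed well inside the patch (sizes are guesses).
    x0 = int(rng.integers(2, w // 2)); x1 = x0 + int(rng.integers(4, w // 3))
    y0 = int(rng.integers(2, h // 2)); y1 = y0 + int(rng.integers(4, h // 3))
    glint = np.zeros_like(img)
    glint[y0:y1, x0:x1] = rng.uniform(100, 180)  # bright screen patch
    # Crude 3x3 box blur so it reads as a smeared fragment, not a sharp box.
    blurred = np.zeros_like(glint)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            blurred += np.roll(np.roll(glint, dy, axis=0), dx, axis=1) / 9.0
    img = np.clip(img + blurred, 0, 255)
    corners = np.array([[x0, y0], [x1, y0], [x1, y1], [x0, y1]], np.float32)
    return img, corners
```

Once the corner predictor works, the mirror algebra can treat the cornea as a convex spherical mirror of assumed radius, so the size and position of the predicted glint constrain where the screen sits relative to the eye.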

Two_hands | 3 months ago

Interesting, $10 per hour is pretty reasonable.

Ha, that's probably why I noticed the EyesOff accuracy drops so much at longer ranges. I suppose two models would do better, but at the moment battery drain is a big issue.

I'm not sure if it's important or not, but the app came out of my own problems working in public, so I'm happy to keep working on it. I do want to train and deploy an optimised model, something much smaller.

Sounds great. Once a POC gets built I'll let you know, and we can see about the clinical side.

Thanks for the tips! I'll be sure to post something and reach out if I get round to implementing such a model.