
Two_hands | 3 months ago

This has been done! It’s the paper I first looked at for this task: https://github.com/rehg-lab/eye-contact-cnn

They created a CNN for exactly this task: autism diagnosis in children. I suppose the model would work for babies too.

Edit: ah, I see your point. In the paper they diagnose autism via eye contact, but what you describe is a task closer to what my model does. It could definitely be augmented for such a task; we'd just need to improve the accuracy. The only issue I see is that sourcing training data might be tricky, unless I partner with an institution researching this. If you know anyone in this field, I'd be happy to speak with them.


_menelaus | 3 months ago

That's great! What I'm talking about is a bit different, though, and might be a lot easier to deploy and work on much younger subjects:

Put a tablet in front of a baby. The left half shows images of gears and similar objects; the right half shows images of people and faces. Does the baby look at the left or the right half of the screen? This is actually fairly indicative of autism and easy to put into a foolproof app.
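The scoring for a test like this could be sketched very simply: given one `'left'`/`'right'` label per video frame (left half = gears, right half = faces), report the fraction of looking time spent on the faces side. The function name and threshold-free ratio here are hypothetical, just to illustrate the idea:

```python
def faces_preference(frame_labels: list[str]) -> float:
    """Fraction of valid frames in which the baby looked at the faces (right) half.

    frame_labels holds one 'left' or 'right' per frame; anything else
    (blinks, looking away, tracking failures) is ignored.
    """
    looked = [f for f in frame_labels if f in ("left", "right")]
    if not looked:
        return 0.0
    return sum(1 for f in looked if f == "right") / len(looked)

# e.g. 6 of 8 valid frames on the faces side -> 0.75
print(faces_preference(["right", "right", "left", "right",
                        "right", "left", "right", "right"]))
```

What counts as a clinically meaningful ratio would obviously have to come from published norms, not from the code.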

The linked GitHub repo records a video of an older child's face while they look at a person who is wearing a camera or something, and judges whether or not they make proper eye contact. This is thematically similar but actually really different: it requires an older kid, both for the model and the method, and is hard to actually use. Not that useful.

Intervening when still a baby is absolutely critical.

P.S., deciding which half of a tablet a baby is looking at is MUCH MUCH easier than gaze tracking. Make the tablet screen bright white around the edges. Turn the brightness up. Use off-the-shelf iris tracking software. Locate the reflection of the iPad in the baby's iris. Is it on the right half or the left half of the iris? Adjust for their position in the FOV and their face pose a bit and bam, that's very accurate. Full, robust gaze tracking is a million times harder, believe me.
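The per-frame decision described above could look something like this. It assumes an off-the-shelf iris tracker has already produced the iris centre, iris radius, the x-position of the tablet's bright reflection, and a rough head yaw; the yaw-correction gain and the sign convention are placeholders that would need calibration on real data:

```python
def screen_half(iris_center_x: float,
                iris_radius: float,
                reflection_x: float,
                yaw_deg: float = 0.0) -> str:
    """Return 'left' or 'right' for which half of the screen the eye is aimed at.

    A crude yaw correction shifts the expected reflection position: when the
    head is turned, the screen's reflection no longer sits at the iris centre
    even for a straight-ahead gaze. The 0.02 gain per degree is a made-up
    number; it would have to be fit during calibration.
    """
    expected_center = iris_center_x + yaw_deg * 0.02 * iris_radius
    offset = reflection_x - expected_center
    # Which sign maps to which screen half depends on whether the camera image
    # is mirrored; fix the convention during calibration. Here: positive = left.
    return "left" if offset > 0 else "right"
```

In a real app you'd also want a dead zone around zero offset and a minimum-brightness check on the reflection blob before trusting any single frame.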

Two_hands | 3 months ago

That's a cool idea, thanks for sharing! It's cool to see other uses for a model I built for a completely different task.

Is there any research or are there papers on this type of autism diagnosis tool for babies?

To your last point, yes, I agree. Even the task I set the model up for is relatively easy compared to proper gaze tracking; I just rely on large datasets.

I suppose you could do it the way you describe and then gather data from that to eventually build out another model.

I'll for sure look into this, appreciate you sharing the idea!