There is a Swedish company called Tobii that does something similar yet a lot more advanced, including lasers and whatnot, to track eye movements.
One of their ideas is that people with physical handicaps could use their eyes instead of a mouse or trackpad to point and click at stuff, but if this could be done with a webcam, that would be even better. If you could click on your iPhone using only your eyes, wouldn't that be cool?
However, it's unlikely you'll get an interesting resolution for your movements with a webcam, compared with lasers or other methods. Image-based eye tracking needs to run in real time (so it's probably inherently low resolution these days), it tracks features that occupy only a small part of the captured image (so you effectively get a much lower-resolution mouse, given the size of your eyes or pupils relative to the webcam frame), and it needs to compensate for errors, which requires a lot of redundancy in the data to generate output (decreasing the effective resolution even more).
Can anyone working in computer vision tell me if I'm right or wrong? I've worked on some small projects in the area, but they were only loosely related to this eye-tracking stuff.
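To put rough numbers on the resolution argument above, here's a back-of-envelope sketch; every figure in it is an assumption for illustration, not a measurement.

```python
# Back-of-envelope: how coarse is webcam gaze tracking as a pointer?
# All numbers below are rough assumptions, not measurements.

webcam_w = 640          # typical webcam frame width in pixels (assumed)
pupil_travel_px = 20    # how far the pupil centre moves in the image
                        # across the full gaze range (assumed)
screen_w = 1280         # screen width the gaze must cover, in pixels (assumed)

# If the pupil centre only moves ~20 px in the image while the gaze sweeps
# the whole screen, each image pixel of pupil motion maps to many screen pixels,
# i.e. the "mouse" quantizes the screen into ~20 coarse columns:
screen_px_per_image_px = screen_w / pupil_travel_px
print(screen_px_per_image_px)  # → 64.0
```

That's before accounting for noise, so the smoothing/redundancy mentioned above would make the effective pointer even coarser.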
Heat maps could also be used to give a computer system a probabilistic notion of the "current object" the user is referring to. If a computer system could respond to where the user is looking, this could greatly augment verbal input. The computer might actually stand a chance of knowing what the user means when she says, "I want you to look up that."
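A minimal sketch of that idea, with made-up object positions and gaze samples: score each on-screen object by Gaussian-weighted proximity to recent fixations, normalize to probabilities, and resolve "that" to the most likely object.

```python
import math

# Hypothetical object positions and gaze fixations, for illustration only.
objects = {
    "search_box": (200, 50),
    "first_result": (300, 200),
    "sidebar_ad": (600, 250),
}
gaze_samples = [(310, 195), (295, 210), (305, 190)]  # recent fixations (fake data)

def gaze_score(obj_xy, samples, sigma=50.0):
    # Sum of Gaussian kernels: fixations near the object raise its score.
    x, y = obj_xy
    return sum(math.exp(-((x - gx) ** 2 + (y - gy) ** 2) / (2 * sigma ** 2))
               for gx, gy in samples)

scores = {name: gaze_score(xy, gaze_samples) for name, xy in objects.items()}
total = sum(scores.values())
probs = {name: s / total for name, s in scores.items()}

# "that" most likely refers to the highest-probability object:
print(max(probs, key=probs.get))  # → first_result
```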
I can't seem to get this thing to figure out where I'm actually looking. Isn't that the point of "eye-tracking"? It is pretty good at figuring out where my eyes are located on my face, but that isn't very valuable information.
Isn't it too stressful to use your eyes as an input device? I don't think there are general uses for people who can use a mouse. I think the next revolution in input devices could be a keyboard that is actually, like the iPhone, a display plus touchscreen, so that the keyboard can change drastically to be application-specific.
jonas_b | 17 years ago
hhm | 17 years ago
IsaacSchlueter | 17 years ago
DenisM | 17 years ago
jyothi | 17 years ago
1. Usability tests (including heat maps and replays for manual observations)
2. Measuring the effectiveness of display ads (web, TV, etc.) more concretely.
3. Accurate Strabismus (Squint) recognition and measurement.
unknown | 17 years ago
[deleted]
stcredzero | 17 years ago
tlrobinson | 17 years ago
Now we just need to hack together a Johnny Lee style 3D demo like the one in the faceAPI video...
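For anyone curious what a head-coupled demo like Johnny Lee's involves, here's a toy parallax calculation in that spirit; the function and all numbers are illustrative assumptions, not faceAPI's actual API.

```python
# Toy sketch of the head-coupled "3D window" effect: shift scene points
# on screen opposite to head movement, scaled by how far behind the
# screen plane they sit (parallax). Units and names are made up.

def parallax_shift(head_x, head_y, point_depth, screen_depth=1.0):
    # A point on the screen plane (point_depth == screen_depth) doesn't move;
    # deeper points move against the head, more strongly the deeper they are.
    scale = (point_depth - screen_depth) / point_depth
    return (-head_x * scale, -head_y * scale)

# Head moves right/up; a point 2 units deep shifts half as far, the other way:
print(parallax_shift(0.1, 0.05, 2.0))  # → (-0.05, -0.025)
```

Feed it the head position estimated from the webcam each frame and you get the basic illusion.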
asif | 17 years ago
Has anyone else had any success?
antirez | 17 years ago