Killakwinn | 4 years ago
1) Currently a lot of other services ask people to take "x" number of footsteps away from the screen to approximate "y" feet from the monitor. In that context, I'd argue that variation in arm length isn't as dramatic as variation in foot size. Ultimately, though, when we're using near vision as a proxy for distance vision, the natural variation in arm length isn't crucial. But! Once we roll out our distance vision check, we won't be relying on arm length at all.
2) Will leave this to Kristine.
3) Interesting. Hadn't thought about this one. My guess is no, because the most important ratio is optotype size : testing distance. (Optotype = the numbers/letters on the screen that a patient is reading.)
4) It's possible and we'll need to pressure test this against gold standard in-person maneuvers.
5) Same as #4. Also, this is a particularly interesting point because a similar problem exists in person. As an extreme example, I've had patients come in who'd memorized the letters in the 20/20 line because they were strongly motivated by one thing or another (e.g., getting their driver's licenses renewed).
6) Is this the "which is better, 1-or-2" question? All I'll say is that there are a number of interesting ways we could try to simulate these.
Hope this answers some of the q's! Thank you for all the thought that went into them.
schoen | 4 years ago
It's interesting to think of the adversarial element in #5 where vision test results are used to qualify for something. In this case a completely unsupervised test is really easy to cheat on -- people can just lean in close to the monitor! If you're not giving people something that they can use to receive a benefit like a job or a license, that incentive to cheat seems weaker, but maybe people will present their fresh prescriptions (!) as purported proof that they have very acute vision.
I was thinking more about psychological aspects where people might not want to admit that they have certain vision problems, so they might feel an incentive to convince themselves that they saw the correct thing. The order and context of presentation might affect how easy it is for people to convince themselves of that. I know I've taken similar tests in person at the optometrist (like looking at a grid to see if any portions appear distorted), but I don't remember exactly how the optometrist asked me to confirm what I'd seen.
This may be an underappreciated soft skill on the part of medical professionals -- getting people to tell the truth about their perceptions in diagnostic tests, or noticing when people may be dishonest or simply uncertain. So that may be pervasively tricky for you to address, at least with a small percentage of patients: if they want to think of themselves as having good vision, they may consciously or unconsciously fudge the results a bit so the assessment comes back better.
Killakwinn | 4 years ago
The psychology of how people relate to their vision -- especially the independence that good vision affords -- is very complex and certainly something I wish our training spent more time emphasizing. There are patients who come into clinic with relatively minor, non-vision-threatening problems who are afraid of imminently going blind, and there are patients on the other end of the spectrum who are imminently going to go blind but are in denial about it (or are not terribly bothered by the possibility). Handling these scenarios and all the gray areas in between is one of the more challenging parts of delivering eye care (and healthcare in general).
Ultimately, we're aiming for clinical accuracy and scalability first, with an understanding that there are lots of underlying incentives and potential roadblocks that we will tackle head on when the time is right.