Cushman | 1 year ago
As a general statement, no, vision isn’t as time-sensitive as hearing, so the timing requirements aren’t as precise. But when it comes to head and hand tracking, the brain’s also doing predictive sensor fusion, and even “unnoticeably” small delays can be disorienting or nauseating. (Ocular fixation is the most sensitive, but hand-eye coordination is also pretty important to the brain!)
The important number in VR is “motion-to-photon” latency. Over 20ms starts to be noticeable to most people; 50ms starts to make most people uncomfortable. That’s the total budget for sensor fusion, simulation, rendering, and display, and that’s just for the bare-minimum experience that doesn’t make people immediately ill.
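To make the budget concrete, here's a toy back-of-the-envelope check. The 20 ms / 50 ms thresholds are the ones quoted above; the per-stage timings are made up purely for illustration, not measurements from any real headset.

```python
# Hypothetical per-frame stage timings in milliseconds (illustrative only).
stages = {
    "sensor_fusion": 2.0,
    "simulation": 4.0,
    "rendering": 9.0,
    "display_scanout": 8.0,
}

# Motion-to-photon latency is the sum of every stage between
# head movement and the photons leaving the panel.
motion_to_photon = sum(stages.values())

if motion_to_photon <= 20:
    verdict = "imperceptible to most people"
elif motion_to_photon <= 50:
    verdict = "noticeable, approaching uncomfortable"
else:
    verdict = "uncomfortable for most people"

print(f"{motion_to_photon:.1f} ms: {verdict}")
```

With these made-up numbers the pipeline lands at 23 ms, already past the noticeability threshold, which is why every stage gets optimized aggressively.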
You can do a lot with prediction and late updates in screen space, which is what makes VR possible at all on current hardware, but it's hard to make up for having sensor data delayed by possibly 150% of the total time budget :)
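The prediction part is essentially dead reckoning: extrapolate the pose forward by the expected latency so the frame matches where the head will be, not where it was. A minimal sketch, with a hypothetical function name (not any particular SDK's API) and yaw-only for simplicity:

```python
def predict_yaw(yaw_deg: float, yaw_velocity_dps: float, latency_ms: float) -> float:
    """Extrapolate head yaw forward by the expected motion-to-photon latency.

    Toy linear prediction: real runtimes use richer models (angular
    velocity + acceleration, filtering), but the idea is the same.
    """
    return yaw_deg + yaw_velocity_dps * (latency_ms / 1000.0)

# Head turning at 200 deg/s with a 20 ms motion-to-photon latency:
# rendering the stale pose would put the image 4 degrees behind.
stale = predict_yaw(90.0, 0.0, 20.0)        # no prediction -> 90.0
predicted = predict_yaw(90.0, 200.0, 20.0)  # 94.0
print(predicted - stale)
```

This also shows why stale sensor data hurts so much: if the samples themselves arrive 30 ms late, the predictor has to extrapolate across a much longer horizon, and prediction error grows quickly with horizon length.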