For analysis of new AR/VR technology I highly recommend Karl Guttag's blog: https://www.kguttag.com/
He's an engineer who understands the limitations of physics, especially when it comes to optics and light, and he is very good at parsing marketing hype vs. reality. (He called BS on Magic Leap years before it launched.)
His posts are exceptionally well researched and explained, even if you don't have a background in physics or optics.
He recently did an analysis of the Apple Glass leaks. I expect he'll post his thoughts on this new technology from FB soon.
For someone who claims to be so concerned about display quality, he makes some really, really shitty choices in font colors when annotating images.
I've been reading Karl's takedowns for a few years now and, while there is never anything technically wrong about what he says, it's also just not as important as he thinks it is.
Yes, waveguide optics aren't the best possible visual experience one could have. But does that matter all that much? I think Karl's own terrible diagrams point to why he makes this mistake: he doesn't understand that design is the much more important consideration here.
The technical limitations of the display technologies that we have today are not impediments to product development. Good software can be designed to work around these issues. Can't render black? Don't design around dark themes. Have a narrow field of view? Don't require people to try to keep mental track of things around them by vision alone.
Karl looks at the HoloLens, sees the waveguides, and misses all the amazing operating system features. Speech recognition, spatialized audio, a fully spatialized desktop metaphor. These things are important and they go a long way towards the usability of the system.
And that was why the Magic Leap failed. Not because the displays were crap, but because the entire system was crap. It was basically "just a" stock Android system with super flaky WiFi and no systems view on delivering a unified product. The entire product was fundamentally mismanaged. The hardware was slightly better than the first HoloLens, but you were far more limited in making good software for the Magic Leap than you were for the HoloLens.
I second Karl’s blog. I've been following the ML journey through his eyes from the beginning, and his insights couldn’t have been more accurate, despite the many criticisms Magic Leap fanboys threw at him.
I'd be careful about statements about 'limitations of physics'. Yes, there are actual physical limitations, but we frequently have beliefs about the limitations of physics that are not actually limitations of physics.
I'll say this from a consumer/product point of view and not a technical or engineering one, but WOW! Assuming they could pull off a finished, high-performance product (big if), this could be the form factor that brings VR to dominance. The silly-looking, awkward-to-wear headset is a big hindrance to adoption, because it absolutely matters whether you look silly or cool when it comes to consumer tech.
The fact that these glasses are clearly opaque reminds me of Marty's reaction when he sees Doc's brown futuristic glasses in "Back to the Future". It does look a bit silly, but at the same time it looks useful, and not heavy and encumbering like most VR headsets.
I've been curious about something in AR for a while, and I can't seem to find the right terms to query this. Why can't ambient light be used for illumination, with an LCD + polarizer to darken each pixel? Suppose you can approximate the illumination of each point of light going through the LCD (for example, with an outward-facing camera above each lens capturing a full-color image and interpolating it down to where it would be overlaid on the LCD). Then you would know how much you need to darken each sub-pixel's color to compensate for the light coming in. It would be super low-power, since there would be no backlight. Also, if you wanted a "transparent" mode, you could make it fairly clear. I'm sure there's some reason this is untenable, but I'm not really sure what it is - perhaps the inability to be accurate with the compensation?
If you can only subtract light the contrast of what you're viewing against the background will always be poor, almost by definition. Also doesn't handle color.
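A minimal sketch of why the subtractive scheme proposed above runs into the contrast problem the reply describes: an LCD with no backlight can only attenuate the ambient light by a per-pixel transmittance in [0, 1], so the displayed color can never be brighter than the scene behind it. (Pure illustration; the numbers are made up.)

```python
def subtractive_pixel(ambient, desired):
    """Transmittance for one LCD subpixel that can only block ambient
    light (no backlight), plus the color the viewer actually sees."""
    # Transmittance is physically limited to [0, 1]: the panel can
    # attenuate incoming light but never add to it.
    t = min(max(desired / max(ambient, 1e-6), 0.0), 1.0)
    return t, t * ambient

# Bright wall (0.8) behind a pixel we want dim (0.1): achievable.
t_ok, shown_ok = subtractive_pixel(0.8, 0.1)

# Dim room (0.2) behind a pixel we want bright (0.9): transmittance
# saturates at 1.0 and the viewer still only sees 0.2 -- the overlay
# can never be brighter than the background.
t_sat, shown_sat = subtractive_pixel(0.2, 0.9)
```

So even with perfect camera-based compensation, anything brighter than the real scene behind it is simply unreachable.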
I don't know too much about optics and VR/AR, but can this technology be adapted for AR applications (a la Google/Apple Glass)? Personally, I think AR applications are much more interesting than VR (especially in the short term)...
That is what HoloLens 2, Magic Leap, and some other products are. These are "cool" but have not set the world on fire in terms of product-market fit.
If you are building an AR system there is always an awkward balance between "letting the environment shine through" and "having projected items be bright enough to be visible". If you put something black in front of the holograms at least now you have just one problem instead of two problems.
Are there any AR/VR technologies addressing software developers? I would very much like to replace my shitty monitor with AR/VR glasses for development.
I know this may not seem like the "intended use case", but the developer experience could use some innovation for a change. It's also one way to bring these technologies closer to developers.
Not for coding. The resolution just isn't there yet.
Roughly, per-eye resolution is in the same ballpark as HD displays, but stretched over a 90+ degree field of view. Fonts need to be very large to be legible. You can create a theater sized virtual monitor, but it's just taxing to use. Aliasing artifacts make it worse.
At least for text-focused tasks, I'd take virtually any display built in the past 40 years over a modern VR headset.
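A back-of-envelope comparison makes the resolution point above concrete (the panel resolution, FoV, and monitor geometry below are assumed, typical 2020-era numbers):

```python
import math

def pixels_per_degree(h_pixels, fov_deg):
    # Average horizontal angular resolution, ignoring lens distortion.
    return h_pixels / fov_deg

# Typical 2020-era headset: ~1440 horizontal pixels per eye, ~90 deg FoV.
vr_ppd = pixels_per_degree(1440, 90)             # ~16 px/deg

# 24" 1080p monitor (~53 cm wide) viewed from 60 cm away.
monitor_fov = math.degrees(2 * math.atan(53 / (2 * 60)))
desk_ppd = pixels_per_degree(1920, monitor_fov)  # ~40 px/deg
```

Roughly 2-3x fewer pixels per degree than a cheap desktop monitor, which is why small text is so punishing in a headset.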
My advice is to save your eyes and put the money you’d spend on VR over the next decade and buy a very nice monitor now instead. Every aspect of the experience will be superior, and if you’re a developer the payback time will be relatively short.
If anyone here is in the know with the AR/VR scene, could they shed some light on why AR hasn't taken off yet?
VR is cool, but it seems like a much more useful concept to me to have actual reality with enhanced information.
Imagine wearing glasses and looking at a plate of food and having it estimate + track calories and macros, or paint GPS direction arrows on surfaces in realtime, or put the names of people you've met before above their heads so you can avoid awkwardly admitting you've forgotten.
Is it technical limitations, or cost?
---
Edit: Many people replied with really informative answers to this already. I genuinely appreciate your time and insight, thank you :)
It's in progress, currently limited by both hardware and cost.
There used to be a good blog post from Michael Abrash, from when he was at Valve, that talked about two main issues: latency, and drawing black effectively.
Latency is critical since low latency is a requirement for things looking real (humans have fast visual systems), but that's ultimately a hardware problem that should get solved in time.
Drawing black is harder because AR uses ambient light, and putting a black line on the screen in front of your face doesn't work for focus.
Unfortunately it looks like Valve killed their blog, but the Wayback Machine has it: https://web.archive.org/web/20200503055607/http://blogs.valv...
My bet is that Apple will pull it off, Apple Watch style, with front-facing Lidar: https://www.youtube.com/watch?v=r5J_6oMMG7Y
Probably at first they will mostly be for notifications and interacting with apps in a window in your visual field, getting most of their power from the phone. Things like looking at food for calories, names, etc. will come later, when a front-facing camera is acceptable and there's existing UI in place.
I think this is probably the next platform after mobile devices; looking at little glass displays is a lot worse than having a UI in your visual field (if it can be done well).
[Edit]: A more recent blog post from Abrash on this topic: https://www.oculus.com/blog/inventing-the-future/
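The latency point above can be made concrete with a quick number (the head velocity and latency figures below are illustrative assumptions, not measurements of any particular headset):

```python
def registration_error_deg(head_velocity_dps, latency_ms):
    """Angular drift of a world-locked overlay caused by
    motion-to-photon latency during a head turn."""
    return head_velocity_dps * latency_ms / 1000.0

# A casual head turn (~100 deg/s) with 20 ms motion-to-photon latency
# smears a "fixed" hologram by 2 degrees -- clearly visible jitter.
err = registration_error_deg(100, 20)
```

This is why AR registration demands latency on the order of single-digit milliseconds, well beyond what ordinary display pipelines deliver.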
Presently, doing the AR processing to extract features from a space uses either cameras or some kind of LIDAR or time-of-flight scheme to get a point cloud of the objects around you. The data from a single ~1-2 Mpixel camera (say a 1280x800 RGB camera) at say 30 fps is only processable on an SoC or an ASIC. If you choose to process the raw image locally, then you need feature detection, extraction, etc. algorithms that can run locally on that processor.
For AR, a more realistic approach to a true all-in-one solution that sits on your face is to create an ASIC that simply gobbles up raw camera data, H.265-compresses it, then sends it over a low-latency link to the phone in your pocket. But when you consider that the chip also needs to drive a display (or two displays) of some sort, you realize you also need a way to receive video data back (potentially an H.265 decoder as well), a way to display that video stream, and some potential fix-up on that stream (at least some kind of GPU). So you need a relatively capable SoC running on the device on your head, plus a cable, battery pack, etc., even if the phone in your pocket is still the main thing running the show.
So the combination of needing to use cameras, drive displays, and receive data wirelessly means you have fixed costs, power needs, and a limited set of SoCs to choose from at present. It's doable; it's just early days for the hardware to support it.
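For scale, the camera stream described in the comment above works out to roughly the following (the 100:1 compression ratio is an illustrative assumption; real H.265 ratios vary widely with content and quality settings):

```python
def stream_mbps(width, height, bytes_per_pixel, fps):
    # Uncompressed video bandwidth in megabits per second.
    return width * height * bytes_per_pixel * fps * 8 / 1e6

raw = stream_mbps(1280, 800, 3, 30)   # ~737 Mbit/s uncompressed
h265 = raw / 100                      # ~7.4 Mbit/s at an assumed 100:1 ratio
```

Three-quarters of a gigabit per second raw is why an on-glasses compression ASIC, rather than shipping raw frames to the phone, is the realistic design.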
> If anyone here is in the know with the AR/VR scene, could they shed some light on why AR hasn't taken off yet?
Because demand is too low, and prior attempts at hyping it up with marketing ended like google glass?
I work in an engineering consultancy that has done a dozen VR/AR toys over the last 5 years.
Some quite big brands use our tech and engineering, though NDAs, NDAs, NDAs...
There is no magic trick behind any product on the market. The physics and optics of AR/VR glasses are very simple, high-school-level simple. It's just that too many companies want to add "smoke and mirrors" into the optical scheme...
Making AR/VR goggles power efficient, and lightweight enough for daily use is possible even with current day tech. It's not much of a secret now that there is an IP blocker on a critical technology owned by Beijing University that shuts everything down.
Microsoft, Facebook, Apple experimenting with lasers now is all about them trying to work around that blocker.
Lots of reasons:
No big company has pushed it yet.
Battery constraints.
Processing constraints.
Space constraints, no one wants to look like a cyborg.
Software to make it useful is in itself a large undertaking.
Making AR react to its surroundings requires really good machine perception, which requires good sensors, large batteries, and powerful processors.
It's coming but everyone is waiting for the tech to be ready.
It is not taking off in the consumer space because of cost. HoloLens 2 is the best headset available today with a relevant software platform, and it isn't even sold to end customers anymore, just to enterprises.
What keeps costs up are the technical limitations. Microsoft and Magic Leap each invested more than a billion to solve the tech challenges, but the truth is that the displays are not good enough. Framerate is too low (compare with what would be necessary for VR), FoV is still rather limiting, colors and blacks are still too faint for many environmental lighting conditions, room scanning is still too inaccurate, battery life too short.
This is not to say there hasn't been great progress. HoloLens 2 solves the comfort problem and is a nice step forward in resolution and FoV. They just messed up the image quality (color banding/rainbows are a big issue).
Lots of opinions here, but it's simpler than most people are saying. The display technology doesn't exist yet. It can't succeed in the mass market until it looks like regular non-dorky eyeglasses and has specs exceeding current VR displays plus transparency and sunlight-level brightness. To a first approximation nothing else matters.
I think the deluge of information would get old really quickly. Two-dimensional flashing lights really pale in comparison to the real world. And I think having that in your face all the time decreases the experience of the real world instead of augmenting it.
Just think of how it feels to look at your phone halfway through a long hike. I always feel it lessens the magic, and introduces low grade anxiety.
The hardware is significantly harder to get right. VR devices work well specifically because they don't care about your environment. AR devices, to be any good, need to have a semantic understanding of your environment. That's very hard to do, especially on the power and compute budgets that mobile devices allow. And an AR device that isn't mobile is a stupid AR device.
The software is significantly harder to get right. It's a lot easier to model a static scene and make some physics-based interactions in it than it is to try to figure out how to make overlays that react to a real environment.
But I think, much more importantly, the people producing most software in the immersive software space just don't care about your immediate environment. They care a lot more about giving you a canned experience. It's hard to find funding for anything that isn't some sort of media consumption. This is true across both VR and AR. In VR it's ok, because there is no environmental context to exploit anyway. But on AR devices, you just end up with a bad VR app: all the hardware limitations of a mobile AR device with none of the differentiating features. So because you don't get software that cares about your environment, you don't get good user experiences on the hardware.
Open up your iPad or HoloLens or Magic Leap app stores and take a survey of the apps that are there. How many of them have any understanding of your environment? There are a lot that don't even take into account the "room mesh", the solid surfaces that the device can see, to say nothing of what those solid surfaces represent! I'd estimate upwards of 50% on AR headsets, and maybe 30% on iPad, do absolutely nothing with any surfaces beyond asking you to find a flat space on the floor. That's just crappy VR. As for the ones that attempt to understand what is in your room? Vanishingly few.
You can make a pretty good consulting career out of making what is largely just a PowerPoint presentation in 3D: a collection of canned elements where the user can click buttons to get scene transitions and animations, all with a directed narrative that is trying to tell you something. Advertisers want it. Media companies want it. A lot of big-industry companies completely unrelated to media want it just to show off at conferences to "prove" they are "forward looking".
And you'll get a lot of those clients asking, even demanding, that you make that as an AR app, especially on iPads. But it sucks. It's just not anything like what's good in immersive experiences. It fits a little better in VR; it still sucks in VR. But it comes from backwards priorities: these companies start from wanting VR/AR and work backwards to a use case. And often they lack any experience or even actual interest in immersive design. What they want is just to do a marketing piece. There are very few companies that start with a use case and then find out whether VR or AR is the right solution.
But that's where the bread-and-butter money is. And it sucks the air out of the room. It leaves the real, good, immersive experience development to people who are independently wealthy enough to do it on their own, or to hobbyists hacking it together in their spare time.
Short answer (as others have said): It's harder to make AR than VR. There are a couple of reasons, but most importantly it's because of the way optics work. You can read more on my post here, where I've also linked some further resources that go deeper into the issues: https://shafyy.com/post/ar-vr-two-sides-of-the-same-coin/
AR is shit. Garbage technology, useless features. Requires too many coincidental factors. That about sums it up.
VR is better because you can fabricate entire worlds and spaces for any task. Instantly useful. And if you really needed to combine the real world with generated content, you could theoretically do it by just overlaying content onto a video feed in VR.
Impressive, but I doubt even FRL can overcome these design limitations.
Digilens has been trying to do that forever, and when you introduce more colors you face the same kinds of problems seen in HoloLens 2.
https://research.fb.com/wp-content/uploads/2020/06/Holograph...
“Finally, holographic lenses can be constructed so that the lens profiles can be independently controlled for each of the three color primaries, giving more degrees of freedom than refractive designs. When used with the requisite laser illumination, the displays also have a very large color gamut.”
They have full colour working, and apparently well, in the benchtop prototype.
There’s discussion towards the end as to options for implementing full colour in the HMD prototype.
Would you mind elaborating on (or provide a link to) some of the problems you mentioned Hololens 2 having? Also, I've never heard of Digilens; do you have a preferred source for learning more about that device and what its limitations are?
Can you expand on what you mean by "more colors" ? Toward the end of the article they show a multi-color image from the larger benchtop version of the prototype. Do you mean that adding colors doesn't scale down beyond a certain size?
To be completely honest, I would settle for an early monochrome version - I'm sure people can still make some great games/features like this. I think the form factor (and hopefully price) really is so much more important at this point.
"The First" series on Hulu reminded me of this; they use the glasses technology quite frequently, coupled nicely with voice and gestures. I was watching it yesterday, then saw this and thought it could be just around the corner. Maybe Apple is doing something similar?
What has a better chance of miniaturization: a full-lens screen like this, or a projector like the one in the HoloLens? The projector has the benefit of supporting passive pass-through AR with a semi-reflective mirror.
I suppose you could incorporate both.