
Cascaded Displays: Spatiotemporal Superresolution using Offset Pixel Layers

105 points | chdir | 11 years ago | research.nvidia.com

37 comments

[+] chdir|11 years ago|reply
Video showing its capabilities: http://www.youtube.com/watch?v=0XwaARRMbSA
[+] mtkd|11 years ago|reply
Very useful, thanks.

In 10 years, the "before" example in a before/after video will still look like it does in that video - just as it did 10 years ago.

The "before" example shots never seem to evolve - which makes me cynical about display-technology demos I don't see in person.

[+] yourad_io|11 years ago|reply
Volume is really low (in my case at least) but the automatic captioning is a home run in this one.
[+] oceanofsolaris|11 years ago|reply
Very interesting approach.

I think one interesting aspect of this is that it couples spatial as well as temporal interpolation. This means that you get higher resolution as well as a higher framerate, but on the downside it seems to introduce additional artifacts, depending on how these two interpolations interact.

I have not yet read the technical paper and only watched the video without sound, but from the video it seems that moving sharp edges introduce additional artifacts (visible when looking at the features of the houses in peripheral vision at 5:11 in the video). This is roughly what you would expect if both pixel grids try to display a sharp edge: due to their staggered update, one of the two edges is always at the wrong position.

This problem could probably be somewhat alleviated by an algorithm that has some knowledge of the next frames, but this would introduce additional lag (bad for interactive content, horrible for virtual reality, not so bad for video).

I intend to read the paper later, but can anyone who already read it comment on whether they already need knowledge about the next frame or half-frame for the shown examples?

[+] lloeki|11 years ago|reply
> on the downside seems to introduce additional artefacts

It definitely introduces a form of ghosting visible near the rear end of the motorcycle.

As for lag, I can already see John Carmack cringing! There may be an interesting effect, though, in that the increase in apparent resolution is quadratic while the increase in computation is linear. Hardware-wise, this could possibly be done straight in the double-buffering phase without additional lag, if it can be made to race the beam.

[+] birger|11 years ago|reply
If I understand correctly, the idea is that you get a high-resolution display by putting two low-resolution displays in front of each other?
[+] thrownaway2424|11 years ago|reply
This is not the first time that someone has stacked up two displays to get better output. About five years ago there was a paper about using a DLP projector to backlight an LCD display, yielding high dynamic range. Can't find the paper right now, can only find this poster http://www.cis.rit.edu/jaf/publications/2009/ferwerda09_vss_...

Like the LCoS hack at the end of this video, the DLP backlight suffered from registration artifacts and other crazy limitations. It's still a nifty idea, though.

[+] aroman|11 years ago|reply
Yeah, this is what I'm wondering about as well. What does the actual implementation of this look like? Is it just one display being fed 2 low-resolution image streams? And is there any effort required to synthesize the cascaded image?
[+] blencdr|11 years ago|reply
I have difficulty understanding the mechanism of this supersampling (2 successive images to make one?). Can anyone explain this in a simple way?
[+] ygra|11 years ago|reply
They have two layers, slightly offset (by half a pixel in both directions), on which they show different images that combine into one of higher resolution. They can also show different frames in quick succession, fast enough that they appear to belong to the same image, with each frame contributing different parts to either the temporal or spatial resolution of the final image.

Since they're using off-the-shelf LCD displays for their prototype, I guess the final result is not yet flicker-free (they probably cannot show more than 60 fps, and thus no more than 15–30 high-resolution frames per second). That is also evident in their demonstrating the capabilities with 5- and 10-fps video. But that's just a matter of a higher refresh rate for the displays, I guess, unless computing the individual frames is too taxing for now (it doesn't seem to be; they do plenty of work in shaders, being NVidia and all).

Major benefits seem to be cost, simplicity and size; their prototypes were built as a head-mounted display and a small projector.
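
The half-pixel-offset idea described above can be sketched numerically. Here is a toy NumPy model (my own construction, not the paper's actual optimization): each panel is treated as a transmittance mask, upsampled by pixel replication, the rear copy shifted by one fine-grid cell (wrapping at the edges for simplicity), and the two multiplied as light passing through both panels would be.

```python
import numpy as np

def cascade(front, rear):
    """Simulate two stacked LCD layers, the rear offset by half a pixel
    in x and y. Each coarse panel is upsampled 2x by pixel replication,
    the rear copy is shifted by one fine-grid cell, and the two masks
    are multiplied -- a rough model of light passing through both."""
    f = np.kron(front, np.ones((2, 2)))        # 2x pixel replication
    r = np.kron(rear, np.ones((2, 2)))
    r = np.roll(r, shift=(1, 1), axis=(0, 1))  # half-pixel offset (wraps)
    return f * r

front = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
rear  = np.array([[1.0, 1.0],
                  [1.0, 0.0]])
out = cascade(front, rear)
print(out.shape)  # (4, 4): 4x the addressable grid from two 2x2 panels
```

Each fine pixel is the product of one front and one rear coarse pixel, which is why two NxN panels can address a (2N)x(2N) grid - but neighboring fine pixels share coarse values, so not every high-resolution image is exactly representable.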

[+] snarfy|11 years ago|reply
Take a piece of graph paper, and then put another on top, offset in X and Y by half a square. You can still see the lines underneath, making it look like the grid has double the number of squares.

The rest is math.

[+] p1mrx|11 years ago|reply
The main part of an LCD is transparent. It looks like they've stacked one in front of the other, with a half-pixel offset, and arranged the polarizers in such a way that they perform a multiply operation.

So, two panels can produce 4X the resolution, using only static images. But I'm guessing they'd have to sacrifice some bits in the luminosity domain to make it work.
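
On the "sacrifice some bits" point: one way to see the luminance cost is that two 8-bit panels multiplied together do not yield 16 bits of usable gray levels - many products collide, and the surviving levels are unevenly spaced. A quick brute-force count (plain Python, my own illustration, not from the paper):

```python
# Each panel drives an 8-bit transmittance value; the light reaching the
# eye is proportional to (a/255) * (b/255).  Counting the distinct
# integer products a*b shows how many output levels actually survive:
levels = {a * b for a in range(256) for b in range(256)}
print(len(levels))  # noticeably fewer than the 65536 naive combinations
```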

[+] npinguy|11 years ago|reply
I would really like to see some data on the memory savings using this technique. How significant are they?
[+] druidsbane|11 years ago|reply
I would guess zero. My understanding is that you render at the full, higher resolution and then simply compute the proper subpixels on the offset displays to align them right. You still need all the data there, using the full amount of memory; otherwise you can't really perform the calculations necessary for the subpixel/temporal interpolation.
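
A back-of-the-envelope check of that point (buffer sizes only; the panel resolution here is a made-up figure): the two panel images together hold half the pixels of the full-resolution frame, but the full-resolution frame has to be rendered first, so peak memory isn't reduced.

```python
import numpy as np

N = 1080                                      # coarse panel side (hypothetical)
full  = np.zeros((2 * N, 2 * N), np.float32)  # full-res render target
front = np.zeros((N, N), np.float32)          # the two panel images are
rear  = np.zeros((N, N), np.float32)          # derived from `full`

print(full.nbytes)                 # 18662400 bytes
print(front.nbytes + rear.nbytes)  # 9331200 bytes: half, but `full` still exists
```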
[+] higherpurpose|11 years ago|reply
Unfortunately this will be yet another proprietary technology from Nvidia that nobody else will use - which means it won't have mass adoption - which means it's ultimately pointless (unless someone else creates an open source version of it).
[+] exDM69|11 years ago|reply
> Unfortunately this will be yet another proprietary technology from Nvidia that nobody else will use...

This is a scientific/technical research paper for a computer graphics conference. It's nowhere near being a technology that ships.

There's a reason that there are so many "Nvidia only" technologies. Take G-sync displays, for example. The problem they address dates back to cathode-ray-tube display technology, but overcoming it takes integration between the display controller hardware (in the graphics card) and the panel control electronics. Display manufacturers do not make the GPU hardware, so the only option is for a GPU company to try to make the first step.

In the long run some of these technologies will become standard and widespread, but someone has to take the first step, and that first step must be economically viable.

[+] wtracy|11 years ago|reply
NVidia doesn't make displays, so unless you're expecting them to move into that market, your comment doesn't make sense. (Doubly so because this is hardware, not software.)

I expect NVidia to license this to monitor manufacturers to drive up demand for 4k-capable video cards.

[+] ksec|11 years ago|reply
What is the real use case for this? Gaming and VR?

We have no problem making 4K screens, and hardware isn't bound by it either.

[+] Ashwinning|11 years ago|reply
Well, display hardware isn't the problem. 4K, 8K, there's no end to it. Cascaded displays using multiplied layers seem to offer benefits like sharpness at super-high resolutions and effectively smoother results at low-frame-rate (staggered) video playback. This helps remove a major obstacle that high-res display technologies will face in the short term: processing power. Presently, high-end graphics cards can barely crank out 30 FPS at 4K resolution in games. Also, any compression artifacts etc. in textures are much more pronounced on high-res/large displays. While requiring the game-development workflow to change a little, cascaded displays can potentially help render higher-resolution, better-quality/sharper images at lower frame rates (i.e. much more cheaply) while still providing that 60fps feel.

Personally, if this takes off, I can see it saving the Xbox One's ass, as a lot of the complaints from gamers have been about its inferior capabilities for rendering high-end games (it renders many games at 720p 30 frames/second, while the Playstation 4 can crank out 1080p for the same titles). It could also be another factor in prolonging the shelf life of the present generation of consoles, by enabling them to deliver much better graphics with the same hardware. Kind of like what normal maps (among other things) did for the Xbox 360 & PS3: you can see the difference in graphics between a game released in 2005 and a game released in 2013 on the same hardware. Among a lot of other factors, that was why it took 7 years before we saw the next generation of consoles being released. Comparatively, the Xbox 360 came out within 4 years of the release of the original Xbox.

TL;DR - It's not about the display hardware itself, it's about the ease of rendering graphics to meet the demands of high-end display.

[+] corysama|11 years ago|reply
It's very well suited for VR. VR really needs small displays with resolution that is simply not economical to manufacture. An 8K tablet LCD would be crazy expensive, but two 4K LCDs only cost twice as much as one.

As a bonus, VR really wants crazy fast refresh rates as well.

[+] higherpurpose|11 years ago|reply
4K screens are expensive to make. If the 4K burden were taken away from the OEMs and put on the GPU makers (who would have to deal with the performance drawbacks of 4K displays anyway), then TV manufacturers could start selling 4K TVs instead of 1080p ones by next year, and at the same prices (well, they would probably make them a bit more expensive to take some extra profit, but the point stands). Same with tablet makers, monitor makers and so on.