top | item 39118963

AI-Powered Nvidia RTX Video HDR Transforms Standard Video into HDR Video

108 points | Audiophilip | 2 years ago | blogs.nvidia.com

115 comments


zamadatix|2 years ago

A bit of a letdown that the video demoing SDR->HDR conversion is itself only published in SDR. It makes as much sense as demoing a colorization tool in a grayscale video!

sharperguy|2 years ago

At this point, with any new model I think it makes sense to wait until you can run the model on your own input before making any assumptions based on cherry picked examples.

mysteria|2 years ago

If they were serious about showing this tech off they should've provided a video file download. Also indicate that it's an HDR file and should only be viewed on an HDR display. YouTube is just making this look bad as people won't see a difference.

CamperBob2|2 years ago

YouTube tends to post a downscaled SD version first, then they encode and post the higher-res versions when they get around to it. This can take days in some cases. Meanwhile the creator catches the flak...

kevingadd|2 years ago

HDR video playback in the browser is pretty unreliable unless you're on a Mac.

Sparkyte|2 years ago

I am frequently disappointed by such videos.

rado|2 years ago

Ridiculous. Like when James Cameron promoted Avatar HDR with an SDR YouTube video, while YT is perfectly capable of HDR playback.

kelseyfrog|2 years ago

I guess. There's a lot of details we don't know that would change the calculus on this.

To use an analogous workflow, it could be like saying, "It's pointless to shoot video in 10-bit log if it's going to be displayed on Rec.709 at 8 bits." That completely leaves out the transforms and manipulations available in HDR that do have a noticeable impact even when SDR is the target.

Again, we can't know if it's important given the information that's available, but we can't know if it's pointless either.
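kelseyfrog's bit-depth analogy can be sketched numerically. The following is a toy illustration with made-up numbers (not any real grading pipeline): applying a grade in floating point before quantizing to 8 bits preserves tonal detail that the quantize-first order destroys.

```python
# Toy illustration (hypothetical numbers, not any real pipeline):
# why grading in high precision before quantizing to 8-bit matters.

def quantize8(x):
    """Clamp to [0, 1] and round to an 8-bit code value."""
    return round(max(0.0, min(1.0, x)) * 255)

def grade(x, gain=8.0):
    """A toy 'grade': lift deep shadows by a large gain."""
    return min(1.0, x * gain)

# 100 distinct shadow values between 0 and ~2% of full scale.
shadows = [i / 5000 for i in range(100)]

# Grade in float precision, then quantize (the '10-bit log' workflow).
graded_first = {quantize8(grade(x)) for x in shadows}

# Quantize to 8 bits first, then grade (working from the SDR master).
quantized_first = {quantize8(grade(quantize8(x) / 255)) for x in shadows}

print(len(graded_first))     # dozens of distinct output levels survive
print(len(quantized_first))  # the same shadows collapse to a handful of codes
```

The same logic applies to HDR-vs-SDR masters: extra precision upstream leaves room for manipulation even when the delivery format is narrower.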

skottenborg|2 years ago

I could see a future where this works really well. It doesn't seem to be the case right now though.

The "super resolution" showcased in the video seemed almost identical to adjusting the "sharpness" in any basic photo editing software. That is to say, perceived sharpness goes up, but the actual conveyed detail stays identical.
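The sharpness-versus-detail distinction can be illustrated with a toy unsharp mask on a 1D signal (a generic sketch, not Nvidia's algorithm): edge contrast goes up, but a fine feature lost to blurring is not recovered.

```python
# Toy sketch of "sharpness vs. detail" (not Nvidia's algorithm):
# unsharp masking raises edge contrast but cannot restore lost information.

def box_blur(signal, radius=1):
    """Simple box blur; this is where fine detail is irreversibly lost."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(signal, amount=1.0):
    """'Sharpen' by adding back the difference from a blurred copy."""
    blurred = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

def edge_contrast(signal):
    """Largest jump between adjacent samples: a proxy for perceived sharpness."""
    return max(abs(a - b) for a, b in zip(signal, signal[1:]))

# A step edge, then a one-sample dark 'fine detail' at index 6.
original = [0, 0, 0, 1, 1, 1, 0, 1, 1, 1]
captured = box_blur(original)        # what a soft lens or heavy codec delivers
sharpened = unsharp_mask(captured)   # what a 'sharpness' slider does

print(edge_contrast(captured), edge_contrast(sharpened))  # contrast goes up
# But the dip at index 6 was averaged away by the blur and is not restored:
print(captured[5:8], sharpened[5:8])
```

Genuine super resolution would have to invent a plausible dip at index 6; a sharpening filter only steepens the edges that survived.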

brucethemoose2|2 years ago

Note that YouTube is really bad for these demos due to the re-compression, even in zoomed in stills.

moondev|2 years ago

Whatever special sauce the Nvidia Shield uses is honestly incredible. Real-time upscaling of any stream, and not just optimized for low-res sources; it's like a force multiplier on content that is already HD. Supposedly the Windows drivers do it as well, but the effect seems less noticeable to me in my tests.

aantix|2 years ago

I'm curious - what's the best open-source video upscaling library out there?

I looked back about a year ago, and it didn't seem like there were any good open-source solutions.

adzm|2 years ago

Topaz is light years ahead of any open source solution unfortunately.

cf100clunk|2 years ago

An HN search of "Deep Space Nine" and "Topaz" will show some great discussions here covering the dearth of such upscaling solutions, as well as some huge efforts before commonplace AI.

justinclift|2 years ago

It's not exactly what you're after, as it's anime-specific and you need to process the video yourself (e.g. disassemble it into frames, run the upscaler, then reassemble into a movie file), but Real-ESRGAN is very good for cleaning up old, low-resolution anime:

https://github.com/xinntao/Real-ESRGAN/

two_in_one|2 years ago

It depends on what you mean by 'open-source'. Along with training materials and the full setup? That will be hard to find. Upscaling was popular about 10 years back, which is why there isn't much interest today. Training in the old style isn't that hard, but artifacts pop up in all the videos I've seen.

varispeed|2 years ago

That seems like a gimmick and I actually prefer SDR video that is not upscaled. There is something ugly about those AI treated videos. They look fake.

ls612|2 years ago

The RTX video upscaling feature works really well; there's a bug in the Firefox implementation that lets you switch between native and upscaled side by side, and the difference is striking. I don't have an HDR monitor, so I can't tell you how well this new HDR feature works.

deergomoo|2 years ago

They are fake. Ultimately it’s not recovering lost detail, it’s making shit up

rixrax|2 years ago

I recently had some old Super 8 films shot by my parents scanned into 1080p resolution in ProRes HQ. Because of the poor optics of the original camera, imperfect focus when shooting, poor lighting conditions, and general deterioration of the film stock, most of the footage won't get anywhere near what 1080p could deliver.

What I'd like to try at some point is to let some AI/ML model process the frames and, instead of necessarily scaling it up to 4K etc., 'just' add (a.k.a. magic in) the missing detail in the 1080p version and generally unblur it.

Is there anything out there, even in the research phase, that can take existing video stock and hallucinate detail into it that was never there to begin with? What Nvidia is demoing here seems like a step in that direction...

I did test out Topaz Video and DaVinci's built-in super resolution feature, both of which gave me a 4k video with some changes to the original. But not the magic I am after.

anjc|2 years ago

I also restored some Super 8 footage recently and had great success. The biggest win I had wasn't resolution, but slowing down the speed to be correct in DaVinci, and interpolating frames to make it 60fps using the RIFE algorithm in FlowFrames. I then used Film9 to remove shake, colour-correct, sharpen and so on.

Correcting the speed and interpolating frames added an amazing amount of detail that wasn't perceptible to me in the originals (though it was there).

All of this processing does remove some of the charm of the medium, so I'll be keeping the original scans in any case.

actionfromafar|2 years ago

An interesting thing about Super8: the resolution is generally very poor, but it can have quite the dynamic range. Also, with film in general (and video, but it's easier with film because you have global shutter) you can compensate motion blur and get more detail out which isn't visible when you look at the film frame by frame. And none of this needs AI.

Regarding hallucination, I agree with the sibling comment, the problem is that faces change. And with video, I'm not even sure the same person would have the same face in various parts of the video...

baq|2 years ago

there is AI tech to do this already. it has a slight problem, though: it adds detail to faces (this is marketing speak for completely changes how people look).

UberFly|2 years ago

Something like this will always change the original as it's guessing what should be there as it up scales. Only time will improve the guessing.

poglet|2 years ago

You could look into RTX Video Super Resolution

kwanbix|2 years ago

The HDR transformation was really impressive. The upscale, not so much. At least not on my monitor.

DrNosferatu|2 years ago

Speaking of which, Nvidia has built-in live AI upscaling on the Shield TV android box.

- Is there any stand-alone live AI upscaling / 'enhance' alternative for android or any other platform?

lagadu|2 years ago

The Shield is kind of an extreme outlier in today's environment. A device from 2015 that 9 years later is still one of the top tier choices in its (consumer) market is almost unheard of.

In fact, it's reportedly the currently supported Android device with the longest support life[0]. It's crazy that mine still gets updates.

[0]https://www.androidcentral.com/android-longest-support-life-...

maxglute|2 years ago

Interested in this too. I replaced my shield with a steamlink to desktop that does upscaling which is very clunky.

DrNosferatu|2 years ago

So, should one buy a Shield TV today?

It’s pricey, and being so old, I fear it will soon be obsoleted…

bendergarcia|2 years ago

I think they should rephrase. It makes SDR appear HDR. It's just making up information, no? It's not actually making it HDR; it just appears to be HDR.

Alghranokk|2 years ago

Making up information? The same can be said for most commonly used modern compressed video formats: just low-bitrate streams of data that get interpolated and predicted into producing what looks like high-resolution video. AV1 even has an entire system for synthesizing film grain.

The way I see it, if the AI-generated HDR looks good, why not? It wouldn't be any more fake or made up than the rest of the video.

manmtstream|2 years ago

Now it will be absolutely impossible to accurately convey the artistic intent, when there's no way to know how it will look on consumer devices.

luma|2 years ago

Consumer devices have never been known for color accuracy, and that goes back a long way. The running joke in broadcast was that NTSC stood for "Never Twice the Same Color".

cmcconomy|2 years ago

I think we lost that battle with motion interpolation on consumer TVs

fsiefken|2 years ago

I wonder if AI can be used to extrapolate 4:3 to 16:9 format, or to create stereoscopic video (for use in VR or on 3D TVs).

LeoPanthera|2 years ago

During the brief moment that 3DTV was popular, almost all 3DTVs had a mode that could "convert" 2D to 3D, based on movement in the scene and other pre-learned cues. "Things that look like people should be in front of things that look like scenery", and so on.

I miss 3D. I loved it, and I was sad that it didn't catch on. It enjoyed a longer life in Europe, where 3D blu-rays were produced for a few more years after they stopped selling them in the US, and I imported and enjoyed several.

Maybe Apple's VR headset will be a 3D renaissance.

naasking|2 years ago

Possibly to some degree. They're doing crazy things with NeRF.

cm2187|2 years ago

So now we need to stop making fun of cops pressing the "enhance" button in films...

BlueTemplar|2 years ago

We're going to have at least one episode of those lawyer shows where they pressed enhance, and the neural network hallucinated something that wasn't there.

aaroninsf|2 years ago

The work I am interested in this broader domain is conversion (say, via some NeRF) of existing standard video into spatial video e.g. MV-HEVC for immersive experience on the Vision Pro etc.

renewiltord|2 years ago

This stuff is sick. If we had a real-time upscaler on a zoom telescope it would be a fantastic tool while traveling. I'd get a kick out of that.

genman|2 years ago

And what would fake detail in the real world give to you?

xcv123|2 years ago

Traveling to a real destination so that you can look at fake AI generated crap on a screen instead of the actual surroundings.

squarefoot|2 years ago

The upscaling doesn't seem particularly convincing, however, the HL2 RTX video on the same page definitely is.

tpreetham|2 years ago

HL2 RTX also has newer textures and other assets.

4d4m|2 years ago

Feels like a misnomer; it's really "HDR-style" video. The source material does not have the dynamic range embedded; this is an effect filter.
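For what it's worth, the classical (non-AI) version of this effect is called inverse tone mapping. A minimal sketch of the idea, with made-up constants and a generic expansion curve rather than anything from Nvidia's actual model:

```python
# Minimal sketch of inverse tone mapping (SDR -> HDR expansion).
# Constants and the expansion curve are illustrative assumptions,
# not Nvidia's model, which presumably uses a learned network.

SDR_PEAK_NITS = 100.0    # nominal SDR reference white
HDR_PEAK_NITS = 1000.0   # assumed target panel peak

def sdr_to_linear(code):
    """Decode an 8-bit SDR code value to linear light (approx. gamma 2.2)."""
    return (code / 255.0) ** 2.2

def inverse_tone_map(code, expansion=1.5):
    """Expand SDR linear light into a wider nit range.

    The exponent pushes highlights toward the HDR peak while leaving
    shadows mostly alone; the 'extra' range is invented, not recovered.
    """
    linear = sdr_to_linear(code)
    return HDR_PEAK_NITS * (linear ** expansion)

for code in (16, 128, 235):  # shadow, midtone, SDR white
    sdr_nits = SDR_PEAK_NITS * sdr_to_linear(code)
    hdr_nits = inverse_tone_map(code)
    print(f"code {code:3d}: {sdr_nits:7.2f} nits -> {hdr_nits:7.2f} nits")
```

Whether done with a fixed curve like this or a neural network, the output luminance beyond SDR white is inferred from the picture content, which is exactly why "HDR-style" is the more honest description.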

zokier|2 years ago

> Using the power of Tensor Cores on GeForce RTX GPUs, RTX Video HDR allows gamers and creators to maximize their HDR panel’s ability to display vivid, dynamic colors, preserving intricate details that may be inadvertently lost due to video compression.

There is so much marketing BS in one small paragraph. For starters, generating (or hallucinating) data is IMHO the opposite of preserving anything. Then, HDR is less associated with "intricate details" and more with color reproduction. Finally, video compression is the one thing that usually does not have problems with HDR: even the now-venerable x264 can handle HDR content; it's almost everything else that struggles.

Of course, in true marketing tradition, none of the claims is strictly false, either. I'm sure there are many ways to weasel out of them.

nwellnhof|2 years ago

They claim to preserve color detail that was lost due to compression of the dynamic range. What's wrong with that?