It's a game changer for sure. I'm blown away by what they've done in Death Stranding at the moment: 4K DLSS running at around a locked 60fps, everything set to ultra.
What card?
Yeah I don't get it, personally. I don't really understand how it's possible.
The hidden magic is that at some remote place and time, NVIDIA and the devs computed a sort of ground truth at much higher resolution than 4K, which reveals far more detail than a 4K image can. The DLSS algorithm then uses this information to reconstruct images at a given resolution, drawing on details learned from images that contain more detail than a native 4K frame. The result is that certain aspects of the image are enhanced beyond native 4K quality, because DLSS has more information about the fine detail than the native render does, which is why it can look better than native in certain respects. It really is a wonderful application of machine learning imo.
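If it helps, here's a toy sketch of the training idea as I understand it. Everything in it is a stand-in: the tiny arrays, the box downsample, and the nearest-neighbour "upscaler" sitting where the real neural network would be. It's only meant to show the data relationship (better-than-4K ground truth, lower-res input, 4K-ish target), not NVIDIA's actual pipeline.

```python
# Toy sketch of the DLSS training setup described above (not NVIDIA's pipeline).
# Assumption: training pairs come from "ground truth" frames computed well above
# 4K, so the target holds more detail than a native 4K render would.
import numpy as np

rng = np.random.default_rng(0)

def box_downsample(img, factor):
    """Average factor x factor blocks: a crude stand-in for rendering at lower res."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Pretend this 16x16 tile is the 16K-ish "ground truth".
ground_truth = rng.random((16, 16))

target_4k = box_downsample(ground_truth, 2)     # what the model is trained to match
input_lowres = box_downsample(ground_truth, 4)  # what the game actually renders

def toy_upscaler(low_res, factor):
    """Placeholder for the neural network: nearest-neighbour upscale."""
    return np.repeat(np.repeat(low_res, factor, axis=0), factor, axis=1)

prediction = toy_upscaler(input_lowres, 2)
loss = np.mean((prediction - target_4k) ** 2)  # training minimises this over many frames
print(f"reconstruction error against the better-than-4K-derived target: {loss:.4f}")
```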
I play a few games with DLSS 2.0 that didn't support it initially (BFV), and the difference is stark.
Rarely does something affect performance as much as this does, especially with ray tracing enabled.
Remember disc drives? We're bringing them back!
I just hope that it's featured in the next Switch hardware. That's a game changer for Nintendo if it is.
Good point. If the next Nintendo console has DLSS, maybe they will compete by having all the third-party games...
It is what convinced me to go with the 3XXX GPU upgrade over a PS5 at launch. Space magic at its best.
I've actually had to decide between these two products as well, and DLSS pushes me towards a next-gen Nvidia GPU too.
What videos do you recommend to see the potential of DLSS? I've seen the feature discussed but I wanna see for myself.
Digital Foundry's video on DLSS 2.0 in Control is one of the best you can watch I think:
Digital Foundry's video on DLSS 2.0 in Control is one of the best you can watch I think:
Edit: Beaten lol, at least we have established consensus!
It has to be supported by the game and you need an RTX GPU. That's what they meant I think.
That's what I thought, thanks. How many games have support so far?
Would be a game changer if the next Nintendo console had it built in. They seem to have a pretty good relationship with Nvidia.
Well, it's a 20-year partnership, and Nintendo saved Nvidia's Tegra X1 from being stuck on the floundering Shield line.
The DLSS model needs to be trained on a per-game basis, so Nvidia needs to push driver updates if they add support for specific games.
I think that's not the case anymore for DLSS 2.0, if I understood the Digital Foundry analysis correctly.
Indeed, it really feels like magic. Moreover, I've seen the difference between DLSS 1.x and DLSS 2.0 in Control and it really made a difference. Wolfenstein Youngblood also looked really amazing with DLSS, and now I pretty much play every game with it.
It won't be. Even the XSX at peak performance, using rapid packed math to produce a DLSS-style image (if 8-bit is possible), would be 4x slower than DLSS on the RTX 2060, and that already requires about 3ms to create a 4K image. With a 16ms window for 60fps, it's not something the next-gen consoles could do at 60fps; the XSX would take over 10ms to produce a 4K reconstruction, and it might not even be possible at 30fps (33ms window). This really requires the fixed-function efficiency of tensor cores. A less effective DLSS 1.9 was done on CUDA cores, which are not fixed-function, and the errors it makes are noticeable to the naked eye, but combined with a sharpen pass, the next-gen consoles might be able to reconstruct 1440p+ into 4K without it being too noticeable.
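To make the budget concrete, here's the rough arithmetic behind those numbers. The 3ms figure for the 2060 and the ~4x slowdown for packed math are the ballpark assumptions from above, not measurements.

```python
# Rough frame-budget arithmetic for console DLSS-style reconstruction.
# Both inputs are ballpark assumptions taken from the post above.
rtx2060_dlss_ms = 3.0     # approx. tensor-core cost to reconstruct a 4K frame
console_slowdown = 4.0    # claimed best case (8-bit packed math) vs. the 2060

console_dlss_ms = rtx2060_dlss_ms * console_slowdown   # roughly 12 ms

for fps in (60, 30):
    frame_budget_ms = 1000.0 / fps
    left_for_rendering = frame_budget_ms - console_dlss_ms
    print(f"{fps} fps: budget {frame_budget_ms:.1f} ms, "
          f"reconstruction {console_dlss_ms:.1f} ms, "
          f"left for actually rendering the frame {left_for_rendering:.1f} ms")
```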
DirectML so far hasn't been demonstrated to run real-time on AMD hardware. Hopefully, that will be demonstrated soon and is also "baked-in" on the next-gen consoles (but I doubt it).
DLSS IS SUPPORTED ON GEFORCE NOW?!?!?! 👀👀👀
Yeah, was announced last month.
Cyberpunk 2077: Ray-Traced Effects Revealed, DLSS 2.0 Supported, Playable On GeForce NOW
See a new ray-traced trailer and screenshots, learn more about the PC ray tracing effects, and discover how you’ll be able to play Cyberpunk 2077 across all your devices with GeForce NOW.
www.nvidia.com
I was hesitant on upgrading my 1080 Ti but now I feel basically obligated to buy an RTX card.
If the 3080 Ti / 3090 has more tensor cores (rumoured to be a lot more), does it make the implementation even more performant? Like, reconstructing from 720p to 4K? Or just better image quality?
Nice, got the same card. Definitely will replay it.
It took me a while to understand it as well. I don't think the media has done a good job of communicating it. Here is what I have concluded:
The "native resolution" results are using some kind of high-performance but low-quality anti-aliasing method. Typically something like TAA (temporal anti-aliasing). TAA introduces some unintended blurring of details.
DLSS 2.0 is actually very similar to TAA in concept, but it uses a neural network rather than a pre-baked algorithm. The neural network turns out to be far superior to the current TAA algorithm and causes much less unintended blurring of details. It is so much better than traditional TAA that the image may look better even when rendered at a lower resolution and upscaled.
Ultimately the reason DLSS 2.0 can sometimes look better than native is because of how bad TAA is. If you were to render the native resolution image using a higher quality form of anti-aliasing, such as SSAA, then the native resolution image would always win in quality. However the performance cost of SSAA is very high.
What DLSS 2.0 has exposed is that our existing algorithms for doing TAA are woefully sub-optimal. In theory we could figure out a different algorithm that worked as well as DLSS 2.0, but without a neural network. However finding this algorithm is easier said than done.
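To show what I mean by "similar in concept", here's a minimal sketch of temporal accumulation. It's purely illustrative (noise stands in for sub-pixel jitter, and real TAA adds motion-vector reprojection plus the history clamping/rejection heuristics that cause the blurring); DLSS 2.0 feeds the same kind of inputs to a trained network instead of those hand-tuned heuristics.

```python
# Minimal sketch of the temporal accumulation idea TAA and DLSS 2.0 both build on.
# Noise stands in for per-frame sub-pixel jitter; real TAA also reprojects the
# history with motion vectors and clamps/rejects it, which is where the blur and
# ghosting trade-offs come from. Illustrative only, not any engine's actual TAA.
import numpy as np

def accumulate(current, history, alpha=0.1):
    """Exponentially blend the new jittered sample into the history buffer."""
    return alpha * current + (1.0 - alpha) * history

rng = np.random.default_rng(1)
true_image = rng.random((4, 4))                        # the "infinite samples" reference
jittered = lambda: true_image + rng.normal(0.0, 0.3, true_image.shape)

history = jittered()                                   # first frame: a single noisy sample
for _ in range(64):                                    # every new frame adds another sample
    history = accumulate(jittered(), history)

print("single-frame error:", np.abs(jittered() - true_image).mean())
print("accumulated error: ", np.abs(history - true_image).mean())
```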
Brilliant post!
The "native resolution" results are using some kind of high-performance but low-quality anti-aliasing method. Typically something like TAA (temporal anti-aliasing). TAA introduces some unintended blurring of details.
DLSS 2.0 is actually very similar to TAA in concept, but it uses a neural network rather than a pre-baked algorithm. The neural network turns out to be far superior to the current TAA algorithm and causes much less unintended blurring of details. It is so much better than traditional TAA that the image may look better even when rendered at a lower resolution and upscaled.
Ultimately the reason DLSS 2.0 can sometimes look better than native is because of how bad TAA is. If you were to render the native resolution image using a higher quality form of anti-aliasing, such as SSAA, then the native resolution image would always win in quality. However the performance cost of SSAA is very high.
What DLSS 2.0 has exposed is that our existing algorithms for doing TAA are woefully sub-optimal. In theory we could figure out a different algorithm that worked as well as DLSS 2.0, but without a neural network. However finding this algorithm is easier said than done.
What you also need to take into account is that DLSS can be fundamentally better than native rendering in certain specific details (for example hair rendering) due to its ability to reconstruct detail from a much higher resolution ground truth image. However, the algorithm also has artifacts, especially in motion, so you don't get a super-resolution result overall, and it is typically quite comparable to native res on average.
No. Even if AMD does come up with something similar in the future, it's not gonna be as good as Nvidia.
Well, that's false. It could be just as good or better; why make a statement like that?
I actually cannot get over these Control and Death Stranding videos. Apparently DLSS not only runs at up to 100% more FPS, it also looks as good as or better than native 4K? This all sounds too good to be true. I've got an AMD RX 5700 XT, so what do I do? By the looks of it, there's no AMD alternative on the market, or on the horizon for that matter. Assuming AMD were to develop an equally good equivalent down the line, would my 5700 XT even be able to run it?
Frankly, I'm looking at selling my 5700 XT and buying one of the next-gen Nvidia GPUs once they hit the market. I've not even had this GPU for a year and it's been good, but please, someone tell me if there's a catch with DLSS. At this rate, I'm inclined to hold off on Cyberpunk until I've got a GPU with DLSS.
Is there a chance it won't have widespread support because the next-gen consoles run on AMD hardware? Not like DLSS dying would be a good thing, but if it's not expected to make a breakthrough, then it's obviously not worth selling my 5700 XT.
edit: This is what I'm talking about. UNREAL.
Not true anymore. That was the case with DLSS 1.0.
DLSS 2.0 does not use a game-specific neural network.
It really seems that miraculous. And I hope both Xbox Series X and PlayStation 5 already have something comparable up their sleeves for developers to leverage.
What you have to understand about DLSS is that until 2.0 was released just a handful of months ago, it was completely unproven. The PS5 and XSX were already designed, and they won't have tensor cores. Rapid packed math could be a slow substitute that might offer a 30fps version of DLSS-style upscaling, though the PS5 might still be too slow even for that. It's really borderline what they can squeeze out of these boxes, because even an RTX 2060 can perform the math between 4 and 8 times faster, depending on whether 8-bit or 16-bit precision is required.
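For the precision part, here's a back-of-the-envelope sketch. The FP32 number is the commonly cited Series X figure, and the 2x/4x multipliers are the standard packed-math ratios (two FP16 or four INT8 ops per FP32 lane); dedicated tensor-core throughput isn't modelled at all, which is where the 4-8x gap comes from.

```python
# Back-of-the-envelope look at why 8-bit vs 16-bit matters for rapid packed math.
# The FP32 rate is the commonly cited Series X figure (treat it as approximate);
# the multipliers are the usual packed-math ratios. Tensor cores are not modelled.
xsx_fp32_tflops = 12.15

rates = {
    "FP32": xsx_fp32_tflops * 1,
    "FP16 (rapid packed math)": xsx_fp32_tflops * 2,
    "INT8 (dot-product ops)": xsx_fp32_tflops * 4,
}

for precision, throughput in rates.items():
    print(f"{precision:>26}: ~{throughput:.1f} T(FL)OPS")
```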
DirectML will at least be in for XSX, providing similar AI reconstruction of the image starting from much lower resolutions, but I also have my doubts it could ever reach current DLSS 2.0 benefits.
First game I played on my new PC was Control with DLSS and all the ray tracing features on, and it was revelatory. And that was only DLSS 1.0.
Well, you're lucky. I tried DLSS with Monster Hunter World, and it looked noticeably worse than native resolution with no DLSS at 1440p.
I don't need someone to tell me what to think; I watched the comparison video, which is literally just a comparison with no opinion.