
Mullet2000

Member
Oct 25, 2017
5,907
Toronto
So I recently did a new PC build and bought a 2070 Super to go with it. I've been wanting to mess with Raytracing but didn't really think about DLSS much until I tried it a day or two ago.

The two games I tried were Monster Hunter World and Metro Exodus. I'm running native 1440p.

Monster Hunter didn't look good with it on. Quite blurry compared to native 1440p, a noticeable downgrade. So I spent a few minutes with it on and then turned it off. Nice performance boost but not worth it for me.

Metro though, holy shit. With Raytracing on and Ultra settings, at 1440p my FPS was hanging out around 50 in the opening area. With DLSS on I'm hanging out around the 70s and I literally cannot tell any visual difference. If I manually change the in-game resolution from 1440p to below that the difference is clear as day but toggling DLSS on doesn't even register to me, visually.

That's...crazy? If more games start getting implementations this good it's basically going to be a no-brainer for a free massive performance boost. I see a lot of talk about Raytracing in general, but DLSS seems like a pretty killer feature too. Much better than standard resolution scaling - I barely ever see it mentioned though. I'll be really impressed if other games can get good implementations like Metro Exodus'.

So yeah if you have a card that can do it and haven't given it a chance, give it a shot. I expected it to be a buzzword and just look like regular-ass resolution scaling but was really impressed with Metro, at least. Really helps offset the Raytracing performance knee-cap.
 

pswii60

Member
Oct 27, 2017
26,673
The Milky Way
Wonder if next-gen consoles will have the tensor cores needed for this type of reconstruction?

Also, with how DLSS works, it sounds like quite a hands-on process (for the platform holder). Currently each final game needs to go to a team at Nvidia, who then put it through a supercomputer to create the AI data for that specific game. It's not just an 'on' switch.
 
Oct 27, 2017
744
New York, NY
Wonder if next-gen consoles will have the tensor cores needed for this type of reconstruction?

Also, with how DLSS works, it sounds like quite a hands-on process (for the platform holder). Currently each final game needs to go to a team at Nvidia, who then put it through a supercomputer to create the AI data for that specific game. It's not just an 'on' switch.
They will not. The tensor cores are there for workstation/compute workloads, and Nvidia didn't want to design a gaming GPU without them, so they looked for things to use them for. RDNA has no such hardware.

It is ultimately possible to emulate on generic compute clusters if someone wanted to, though.
 

orava

Alt Account
Banned
Jun 10, 2019
1,316
How do the devs actually train the neural network? Do they just run the game for x amount of time at a very high resolution? If this works like a "traditional" deep learning system, how well you can train it also dictates how good the end results will be.
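
(For what it's worth, Nvidia has publicly said DLSS was trained against heavily supersampled "ground truth" frames rendered at very high quality. A supervised setup along those lines looks roughly like the toy sketch below; the network, layer sizes, and tensor shapes are all made up for illustration and have nothing to do with Nvidia's actual model or pipeline.)

```python
import torch
import torch.nn as nn

# Toy super-resolution network: NOT Nvidia's architecture, just an illustration
# of "low-res render in, high-res frame out" supervised training.
class ToyUpscaler(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a 2x larger image
        )

    def forward(self, x):
        return self.body(x)

model = ToyUpscaler()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-ins for real training data: low-res rendered frames paired with
# heavily supersampled reference frames at the target resolution.
low_res_frames   = torch.rand(4, 3, 180, 320)  # e.g. 320x180 renders
reference_frames = torch.rand(4, 3, 360, 640)  # e.g. 640x360 supersampled references

for step in range(10):
    optimizer.zero_grad()
    prediction = model(low_res_frames)
    loss = loss_fn(prediction, reference_frames)  # penalize deviation from the reference
    loss.backward()
    optimizer.step()
```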
 

RoboitoAM

Member
Oct 25, 2017
3,117
Makes me excited for the future of "faux-k" on PC. If we can't reliably hit 4k/60 without the most powerful hardware available, then it's certainly a good compromise.
 

Karak

Banned
Oct 27, 2017
2,088
How do the devs actually train the neural network? Do they just run the game for x amount of time at a very high resolution? If this works like a "traditional" deep learning system, how well you can train it also dictates how good the end results will be.
If I recall, it's actually sent to the Nvidia team itself.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
I am very impressed with the way it looks in Wolfenstein Youngblood and Deliver Us The Moon. Like any reconstruction it is imperfect in certain aspects (I do mention that I was still able to count an edge here or there when the motion was extreme), but in most movement and stills it looks like the output resolution, albeit without the temporal AA ghosting or trailing effects that TAA gets with transparencies. Also, it somehow manages to complete certain lines better than TAA does at native, which is very interesting.

One thing that I hope NV considers is going into their backlog and implementing the latest DLSS versions into legacy titles that had the less competent original DLSS type. So it would be great if Metro Exodus, BF V, etc. all got the implementation that came after Wolfenstein Youngblood.

Also, I hope NV is just as open about their dev process on DLSS as they have been since GDC last year, where they had a presentation on it. It was enlightening to see the challenges and the directions they were going to go in... especially now, as it seems pretty different and the resolve is much better.

If I recall, it's actually sent to the Nvidia team itself.

I think Nvidia just needs the game code with various locations in the game, and they generate the images for it. It does require that the game generates motion vectors, though, and generates them correctly (many games apparently have broken ones, and the devs do not always notice).
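
(Side note on why correct motion vectors matter so much: any temporal technique has to reproject the previous frame into the current frame's pixel grid before it can reuse that history, and broken vectors mean the history gets sampled from the wrong place, which shows up as smearing or sparkle. A bare-bones sketch of that reprojection step, in plain numpy with nearest-neighbour sampling and hypothetical array names, not anything from the actual DLSS code:)

```python
import numpy as np

def reproject_history(prev_frame, motion_vectors):
    """Fetch last frame's colour for each current pixel by following its motion vector.

    prev_frame:     (H, W, 3) colour from the previous frame
    motion_vectors: (H, W, 2) per-pixel (dy, dx) offsets, in pixels, pointing to where
                    this pixel's surface was in the previous frame
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + motion_vectors[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + motion_vectors[..., 1]).astype(int), 0, w - 1)
    return prev_frame[src_y, src_x]  # history aligned to the current frame

# Toy check: a static scene (zero motion) reprojects to an identical image.
prev = np.random.rand(720, 1280, 3)
aligned = reproject_history(prev, np.zeros((720, 1280, 2)))
assert np.allclose(aligned, prev)
```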
 

Monster Zero

Member
Nov 5, 2017
5,612
Southern California
I am very impressed with the way it looks in Wolfenstein Youngblood and Deliver Us The Moon. Like any reconstruction it is imperfect in certain aspects (I do mention that I was still able to count an edge here or there when the motion was extreme), but in most movement and stills it looks like the output resolution, albeit without the temporal AA ghosting or trailing effects that TAA gets with transparencies. Also, it somehow manages to complete certain lines better than TAA does at native, which is very interesting.

One thing that I hope NV considers is going into their backlog and implementing the latest DLSS versions into legacy titles that had the less competent original DLSS type. So it would be great if Metro Exodus, BF V, etc. all got the implementation that came after Wolfenstein Youngblood.

Also, I hope NV is just as open about their dev process on DLSS as they have been since GDC last year, where they had a presentation on it. It was enlightening to see the challenges and the directions they were going to go in... especially now, as it seems pretty different and the resolve is much better.



I think Nvidia just needs the game code with various locations in the game, and they generate the images for it. It does require that the game generates motion vectors, though, and generates them correctly (many games apparently have broken ones, and the devs do not always notice).

Are Wolfenstein and Deliver Us The Moon using the tensor cores for their implementation?
 

dex3108

Member
Oct 26, 2017
22,608
One thing that I hope NV considers is going into their backlog and implementing the latest DLSS versions into legacy titles that had the less competent original DLSS type. So it would be great if Metro Exodus, BF V, etc. all got the implementation that came after Wolfenstein Youngblood.

Also, I hope NV is just as open about their dev process on DLSS as they have been since GDC last year, where they had a presentation on it. It was enlightening to see the challenges and the directions they were going to go in... especially now, as it seems pretty different and the resolve is much better.

AndyBNV please show this to your DLSS team :D
 

pulsemyne

Member
Oct 30, 2017
2,641
The Raytracing/DLSS on BFV has a lovely bug on a certain screen where you wait between rounds in multiplayer. Everyone and the background are all sparkly, but only between rounds; before and after are fine. Makes me think that particular screen hasn't been properly put through the DLSS process.
 

Deleted member 13560

User requested account closure
Banned
Oct 27, 2017
3,087
It's tech like this that gets me excited going forward. I'm glad that it's been improved upon. I feel that, on their own, pure raw performance gains from architecture to architecture aren't going to cut it anymore. Tech that boosts performance while retaining graphical fidelity and image quality is going to be the real champion in the coming years. This is going to allow more people to enjoy these games at fantastic graphical settings and great IQ without having to empty their bank accounts.

Now don't get me wrong. I love tech that boosts raw hardware performance. I'm super excited about seeing MCM GPUs in the future. I feel that Hopper (Nvidia's MCM architecture) will be a huge leap in raw performance... and also the end of consumer multi-GPU setups.
 

rocket

Member
Oct 27, 2017
1,306
I am very impressed with the way it looks in Wolfenstein Youngblood and Deliver Us The Moon. Like any reconstruction it is imperfect in certain aspects (I do mention that I was still able to count an edge here or there when the motion was extreme), but in most movement and stills it looks like the output resolution, albeit without the temporal AA ghosting or trailing effects that TAA gets with transparencies. Also, it somehow manages to complete certain lines better than TAA does at native, which is very interesting.

One thing that I hope NV considers is going into their backlog and implementing the latest DLSS versions into legacy titles that had the less competent original DLSS type. So it would be great if Metro Exodus, BF V, etc. all got the implementation that came after Wolfenstein Youngblood.

Also, I hope NV is just as open about their dev process on DLSS as they have been since GDC last year, where they had a presentation on it. It was enlightening to see the challenges and the directions they were going to go in... especially now, as it seems pretty different and the resolve is much better.



I think Nvidia just needs the game code with various locations in the game, and they generate the images for it. It does require that the game generates motion vectors, though, and generates them correctly (many games apparently have broken ones, and the devs do not always notice).

Just 100%'d the game, and Wolfenstein Youngblood made me a believer not only in Raytracing but in DLSS technology; and I haven't even started playing CONTROL. I've been flipping back and forth between VRS (NAS) and DLSS (Quality mode). While my i5 rig can run it at a locked 60 on ultra with NAS enabled 99% of the time, the load on the GPU was pretty high. There is this one area (the Garage in the game) that will tank GPU performance, especially the radio room upstairs with the big white sofa, but that is literally the ONLY area in the game that does that. Running the game with DLSS enabled dramatically decreases the load on the GPU, and I am very hopeful my low-to-mid-tier rig will be able to run Cyberpunk with raytracing on if I run it with DLSS in performance mode.

However, I do notice some visual, checkerboard-like anomalies in one very specific spot in the Catacombs.
 
Oct 25, 2017
4,427
Silicon Valley
They will not. The tensor cores are there for workstation/compute workloads, and Nvidia didn't want to design a gaming GPU without them, so they looked for things to use them for. RDNA has no such hardware.

It is ultimately possible to emulate on generic compute clusters if someone wanted to, though.
I think you'll be surprised by the kind of AI / machine learning tech the new consoles will be packing :)
 

Jobbs

Banned
Oct 25, 2017
5,639
I tried it in Tomb Raider and in Anthem to see what it did and in both cases it just made the game seem very blurry. I shut it off.

Are there other games that do it better?
 

Monster Zero

Member
Nov 5, 2017
5,612
Southern California
I tried it in Tomb Raider and in Anthem to see what it did and in both cases it just made the game seem very blurry. I shut it off.

Are there other games that do it better?

Control, Wolfenstein Youngblood, and Deliver Us The Moon use much better, newer versions of DLSS, with the latter two really starting to make good on what was promised when the tech was first revealed.
 

Gitaroo

Member
Nov 3, 2017
8,004
Wolfenstein YB is the first to really use the tensor hardware; Control was just a SW filter. Neither of those games has high-frequency texture detail. I want to see DLSS in games with high-frequency texture detail.
 

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
So I did some comparisons with Wolfenstein and Control DLSS.

First, let me start by telling you that Control is indeed not using the tensor cores. A user on Beyond3D used Nsight to track the tensor fp16 utilization, and in Control it was basically non-existent.

[screenshot: Nsight tensor FP16 utilization in Control]


They then replaced the Control DLSS.dll file with the one from Wolfenstein and tracked FP16 usage again.

[screenshot: Nsight tensor FP16 utilization in Control with Wolfenstein's DLSS.dll]


As we can see, the Wolfenstein DLSS does use the tensor cores. But don't expect Control with the DLSS.dll from Wolfenstein to look any better, because every game has to be trained specifically to look good. So while the tensor cores are working and doing their job, it seems like the AI is not applied to the actual image. We have to wait for Nvidia to re-release DLSS for Control, if that ever happens. Credits to dorf from Beyond3D: https://forum.beyond3d.com/posts/2102385/

Because the tensor cores are not used in Control and the game's internal upscaler is faster, the software reconstruction impacts the game's performance.

DLSS@720p, upscaled to 1440p

[screenshot: Control, DLSS at 720p internal resolution, 1440p output]


720p native, upscaled to 1440p

[screenshot: Control, 720p native upscaled to 1440p]


The results are actually pretty good; the image quality is definitely better. I could imagine reconstruction software like that for the next-gen consoles, since it runs pretty lightly on the shader cores. They will probably use VRS instead, I imagine.
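
For a sense of how much the reconstruction has to fill in for that comparison, 720p to 1440p is four times the pixels. Quick back-of-the-envelope numbers (just arithmetic, no claim about how DLSS internally splits the work):

```python
# Pixel-count arithmetic for the 720p -> 1440p comparison above.
internal = (1280, 720)   # rendered resolution
output   = (2560, 1440)  # presented resolution

internal_pixels = internal[0] * internal[1]  # 921,600
output_pixels   = output[0] * output[1]      # 3,686,400

print(f"pixels shaded:        {internal_pixels:,}")
print(f"pixels presented:     {output_pixels:,}")
print(f"reconstruction ratio: {output_pixels / internal_pixels:.1f}x")  # 4.0x
```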


Wolfenstein is in a whole other league though; DLSS actually improves performance by quite a lot beyond what simply using the lower resolution gets you.

DLSS quality, upscaled to 1440p

[screenshot: Wolfenstein Youngblood, DLSS quality, 1440p output]


manual scaling @0.66 upscaled to 1440p

[screenshot: Wolfenstein Youngblood, manual 0.66 scaling upscaled to 1440p]


Image quality wise, I think it does a much better job than Control and delivers more performance as well. I think that shows how DLSS has evolved beyond Control.
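
For context on why 0.66 is a sensible manual scale to compare against: the commonly reported per-axis render scales for the newer DLSS modes are roughly 0.67 for quality, 0.58 for balanced, and 0.5 for performance. Those factors are the usually quoted ones, not numbers from this thread, but here is what they work out to at 1440p:

```python
# Approximate per-axis render scales commonly reported for the newer DLSS modes.
# These are assumed values for illustration, not figures taken from this thread.
DLSS_SCALES = {"quality": 2 / 3, "balanced": 0.58, "performance": 0.5}

def internal_resolution(output_w, output_h, mode):
    s = DLSS_SCALES[mode]
    return round(output_w * s), round(output_h * s)

for mode in DLSS_SCALES:
    w, h = internal_resolution(2560, 1440, mode)
    print(f"{mode:>11}: {w}x{h}")
# quality lands around 1707x960, i.e. very close to the manual 0.66 scale used above
```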

The performance differences from DLSS in both Control and Wolfenstein have been documented by Digital Foundry in their respective videos, so definitely check them out if you haven't already.

What do you think? I think it's definitely fascinating to see Wolfenstein and Control in a direct comparison. To be honest, I'm immensely hyped by DLSS lol, really cool tech.
 
Nov 8, 2017
13,110
I'm super interested to see them let us use DLSS 2x, or whatever it used to be advertised as, where you start with a native resolution framebuffer, DLSS it up to twice the resolution (4x the pixel count), then downscale back down to the initial res.

I feel like it's never coming since they haven't mentioned it since the prerelease marketing for RTX cards, but it would be nifty af imo.
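
The flow being described is basically supersampling by reconstruction: render at native res, reconstruct up to 2x per axis (4x the pixels), then filter back down to native. A rough sketch of just that data flow, with an ordinary Lanczos resize standing in for the neural network, so it shows the shape of the pipeline and nothing about the quality DLSS 2x would actually deliver:

```python
from PIL import Image

def fake_dlss_2x(frame: Image.Image) -> Image.Image:
    """Illustrates the '2x' flow: native -> 2x reconstruction -> downsample to native.

    A plain Lanczos resize stands in for the neural reconstruction step, so this
    only demonstrates the data flow, not the quality DLSS 2x would produce.
    """
    w, h = frame.size
    reconstructed = frame.resize((w * 2, h * 2), Image.LANCZOS)  # 4x the pixel count
    return reconstructed.resize((w, h), Image.LANCZOS)           # back down to native

# Toy usage with a generated image, no file on disk needed.
native = Image.new("RGB", (2560, 1440), (40, 90, 160))
antialiased = fake_dlss_2x(native)
print(antialiased.size)  # (2560, 1440)
```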
 

dgrdsv

Member
Oct 25, 2017
11,885
Because the tensor cores are not used in Control and the game's internal upscaler is faster, the software reconstruction impacts the game's performance.
Tensor core usage will always impact the game's performance because the tensor cores run off the same data paths and registers/caches. It's unrealistic to expect a tensor workload to run in parallel with general shading without any performance impact on the latter.
 

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
Tensor core usage will always impact the game's performance because the tensor cores run off the same data paths and registers/caches. It's unrealistic to expect a tensor workload to run in parallel with general shading without any performance impact on the latter.
Isn't one of the main benefits of Turing that fp16 and fp32 workloads can run simultaneously without one interrupting the other?
 

dgrdsv

Member
Oct 25, 2017
11,885
Isn't one of the main benefits of Turing that fp16 and fp32 workloads can run simultaneously without one interrupting the other?
FP32 and INT can, and RT can run alongside them. FP16 and tensor core workloads will stall the main SIMDs since there's just not enough bandwidth to run them simultaneously. They can still be scheduled in a way that lets them run while the main SIMD array is idling for whatever reason, but it will still incur a performance penalty.
 

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
Here's a quick comparison I made, native resolution vs DLSS balanced.

native

[screenshot: Wolfenstein Youngblood, native resolution]


DLSS balanced

[screenshot: Wolfenstein Youngblood, DLSS balanced]


This technology is so impressive. Look at that huge performance jump!