
Ionic

Member
Oct 31, 2017
2,734
It should add around 2.5 milliseconds to each frame on a 2060, or about 1.4ms on a 2080ti, per the rough estimates given by Nvidia.

Do we know if this is more or less a fixed cost? Would DLSS (assuming it was implemented) on like, CSGO take the same amount of time per frame as DLSS on a much more demanding game like RDR2?
 

Lightjolly

Member
Oct 30, 2019
4,573
Yes, it can be combined with DSR, but...

When you are playing at 1080p, you are rendering scenes internally at 1080p.
When you are using 2160p with DLSS, you are rendering internally at 1080p/1440p (depending on the mode the game uses and what you set).
So, using 4k DSR will just force the game to render internally at 1080p --> reconstruct to 2160p --> downsample back to 1080p. While this could result in better image quality, it will not magically give you better performance, because the internal render is still at 1080p. In fact, there should be a slight cost due to the machine learning algorithms having to run through the tensor cores of your GPU first.

That's the whole magic behind DLSS. It allows you to gain performance by rendering at a lower resolution, while still producing a sharp and detailed image without much noise. It's not perfect, but it is impressive.
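To put rough numbers on that chain, here's a tiny sketch (just an illustration, assuming a 1080p display, 4x DSR and DLSS performance mode at half resolution per axis):

```python
# Rough sketch of the render chain described above, assuming a 1080p
# display, 4x DSR (2x per axis) and DLSS performance mode (half per axis).

display = (1920, 1080)
dsr_target = (display[0] * 2, display[1] * 2)          # 3840x2160 "virtual" output
internal = (dsr_target[0] // 2, dsr_target[1] // 2)    # 1920x1080: what the GPU actually shades

print(f"shade {internal} -> DLSS reconstruct to {dsr_target} -> DSR downsample to {display}")
# The expensive work (shading) still happens at 1080p, so there is no
# performance win over plain 1080p -- only the extra DLSS cost on top.
```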
I see, thanks for the detailed explanation. So I take it this is mostly for when you're having performance trouble with all the bells and whistles on at your resolution of choice
 
Nov 8, 2017
13,098
Do we know if this is more or less a fixed cost? Would DLSS (assuming it was implemented) on like, CSGO take the same amount of time per frame as DLSS on a much more demanding game like RDR2?
It should be a fixed cost, as it doesn't depend on scene complexity, just the number of pixels, AFAIK.
Interesting, thanks for that info.

Here's the slide they showed:


[Image: NVIDIA GTC 2020 DLSS 2.0 slide]


My memory was slightly off. 2060 Super was 2.55ms, and 2080ti was 1.5ms (for 4k output image).

So if you were using a 2060 Super to generate a 4k image from 1080p, and you were getting precisely 60fps (for argument's sake), that would be 16.67ms + 2.55ms = 19.22ms per frame, or about 52 fps with the DLSS-based supersampling.

I wouldn't expect this to be a totally ironclad number, but as a rough guide I guess it might be ok? The lower your initial framerate, the smaller the proportional impact is, if it truly is a fixed cost.
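Here's a quick back-of-the-envelope sketch of that, assuming the slide's ~2.55ms figure really is a fixed per-frame cost independent of the game:

```python
# Effect of a fixed per-frame DLSS cost on framerate, assuming the
# ~2.55 ms (2060 Super, 4K output) figure from the slide is truly fixed.

def fps_with_dlss(base_fps: float, dlss_cost_ms: float = 2.55) -> float:
    """Framerate after adding a fixed DLSS cost to every frame."""
    frame_time_ms = 1000.0 / base_fps
    return 1000.0 / (frame_time_ms + dlss_cost_ms)

for base_fps in (30, 60, 120, 240):
    new_fps = fps_with_dlss(base_fps)
    drop = 100 * (1 - new_fps / base_fps)
    print(f"{base_fps:>3} fps -> {new_fps:5.1f} fps ({drop:.1f}% slower)")

# 30 fps only loses ~7%, while 240 fps loses ~38%: the lower the starting
# framerate, the smaller the proportional hit from a fixed cost.
```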
 

shan780

The Fallen
Nov 2, 2017
2,566
UK
I wasn't impressed with DLSS in MHW, but it sounds like it works quite well in this game
 

z0m3le

Member
Oct 25, 2017
5,418
They can't even make its textures load correctly on consoles, and most of the game is a linear corridor that would probably run at 100+ fps on an average PC at 1440p. I don't think it's likely it'll have advanced features like DLSS 2.0...

But I'd be more than happy if such features came to FF15.
FF15 did have DLSS, so FF7R having DLSS 2.0 seems logical.
 

Isee

Avenger
Oct 25, 2017
6,235
so I take it this is mostly for when you're having performance trouble with all the bells and whistles on at your resolution of choice

Yes, that's why it exists.
But I prefer Control's/Youngblood's DLSS over their TAA, for example. ¯\_(ツ)_/¯
Though DLSS 2.0 can sometimes look a bit "oversharp" in my opinion. It's hard to show off.

Example with sharp contrasts in the scene:

1.) You can see reconstruction artifacts on the main character (moving sideways).
2.) But that's not what I'm trying to show: look at the nearly white wall textures. They look a bit strange to me, like there is a fine "grain" effect going on.
 
Last edited:

Letters

Prophet of Truth
Avenger
Oct 27, 2017
4,443
Portugal
That Ars Technica article has a photo with the caption "We're not allowed to talk about the Valve sections yet"

So I guess maybe it's a bit more than just cosmetics? Maybe a new bunker, new resources to deal with to get them?
This article talks a bit about that

DEATH STRANDING for PC is also receiving a hotly anticipated injection of Half-Life related content, offering surreal crossover missions with Valve's Half-Life universe. A familiar face has crossed over into the world of DEATH STRANDING, impersonating Bridges' employees and sending request emails prompting Sam to locate and secure companion cubes throughout the world. Completing these requests will reveal the mystery behind the friendly imposter and award Sam with useful new equipment and accessories.

I guess this stuff is also in the Epic store version?
 

Cheapstare

Banned
Oct 25, 2017
530
Are we looking at a scenario where DLSS implementation will be the major difference that will set PC and next-gen apart, at least early on?
GPU-wise we could have potentially weaker cards that push more pixels, enabling more RT and other effects at higher resolutions, etc.

And I imagine the gap will widen considerably with time. For all the talk of TFs, an early mid-gen revision with tensor cores (or an equivalent solution) doesn't seem so far-fetched to me.
 
Last edited:
OP
dex3108

Member
Oct 26, 2017
22,577
I asked Nvidia to push for DLSS support in RDR2 XD. I know it's a long shot, but there is always hope.
 

seldead

Member
Oct 28, 2017
453
Training isn't the same as having an AI model. Games still need to have the AI trained, but all games can use the same AI network for the training instead of needing to set up a new network for each game like DLSS 1.0.

This is not how deep learning models are specified. You wouldn't use a different architecture per game. Unless you mean something else by "new network"? Hyperparameters maybe?

DLSS 2.0 implements generalised training. It is no longer game specific. This is seemingly the key change from DLSS 1.0.

However, the developer still needs to implement it in the render pipeline (to my understanding similarly to TAA) and provide motion vectors as input. These steps have not changed.
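For what it's worth, the per-frame integration conceptually looks something like the sketch below. This is purely hypothetical and none of these names come from NVIDIA's SDK (the real API is a different, C++ interface); it's just to show what "provide motion vectors as input" means in practice.

```python
# Purely illustrative: roughly what the engine has to hand DLSS each frame.
# None of these names reflect NVIDIA's actual SDK.
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class DlssFrameInputs:
    low_res_color: Any      # frame rendered at the lower internal resolution
    motion_vectors: Any     # per-pixel motion, the same data a TAA pass needs
    depth: Any              # scene depth buffer
    jitter_offset: Tuple[float, float]  # sub-pixel camera jitter for this frame

def reconstruct(inputs: DlssFrameInputs, previous_output: Any) -> Any:
    """Illustrative only: one generalised network consumes the current
    low-res frame plus motion vectors and its own previous output, and
    returns the reconstructed high-res frame. No per-game training involved."""
    ...
```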
 

mordecaii83

Avenger
Oct 28, 2017
6,860
This is not how deep learning models are specified. You wouldn't use a different architecture per game. Unless you mean something else by "new network"? Hyperparameters maybe?

DLSS 2.0 implements generalised training. It is no longer game specific. This is seemingly the key change from DLSS 1.0.

However, the developer still needs to implement it in the render pipeline (to my understanding similarly to TAA) and provide motion vectors as input. These steps have not changed.
That info was taken straight from Nvidia's explanation for how DLSS 1.0 works, so I guess you should ask them?
 

nrtn

Banned
Oct 31, 2017
1,562
This is black magic voodoo cthulhu shit



Better IQ, double the performance.
 
Last edited:

The Omega Man

Member
Oct 25, 2017
3,902
Guys, I have older hardware:
i7 7700K @ 4.6
GTX 1080
16GB RAM

Yay or nay? Will I be able to make this game run and look good?
 

vitormg

Member
Oct 26, 2017
1,928
Brazil
Man, DLSS is black magic and I am really tempted to get a current gen Nvidia card because of it. The new generation can't come soon enough...
 

mordecaii83

Avenger
Oct 28, 2017
6,860
I can only find this post by NVIDIA where they specifically state that DLSS 1.0 required new model training for each new game and that DLSS 2.0 instead uses generalised non game specific training.
I'll look and see if I can find it, but the info I read specifically mentioned having to set up individual AI networks for each game, vs. DLSS 2.0 still needing support added per game but with every game using the same AI network for the training.
 

Mecha

Avenger
Oct 25, 2017
2,479
Honduras
I'm stunned by the DF analysis; this is next-gen PC gaming. Is it possible for any of the next-gen machines to support something similar? Do you need specific hardware to run DLSS 2.0?
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA
I'm stunned by the DF analysis; this is next-gen PC gaming. Is it possible for any of the next-gen machines to support something similar? Do you need specific hardware to run DLSS 2.0?
RTX 20 series. The 30 series will support it too when they launch.

Just a reminder that DLSS isn't black magic or 'free' performance. Remember how much more expensive the 20 series was? Part of that was the additional tensor cores on the GPU. That silicon is handling DLSS 2.0, and if you bought an RTX card you paid for it. Games that don't use DLSS leave all that silicon completely idle, doing nothing.

So while I love the approach, it uses specific silicon to achieve the result.
 

Mecha

Avenger
Oct 25, 2017
2,479
Honduras
RTX 20 series. The 30 series will support it too when they launch.

Just a reminder that DLSS isn't black magic or 'free' performance. Remember how much more expensive the 20 series was? Part of that was the additional tensor cores on the GPU. That silicon is handling DLSS 2.0, and if you bought an RTX card you paid for it. Games that don't use DLSS leave all that silicon completely idle, doing nothing.

So while I love the approach, it uses specific silicon to achieve the result.
Thanks for the answer, and I agree with your point: either it should become more standard, or they should find a way to run it on the main hardware (don't know if that's feasible). Still very cool tech that should drive more innovation on the optimization side.
 

DjRalford

Member
Dec 14, 2017
1,529
DLSS has the ability to extend the life of the 20xx series for some time, provided Nvidia don't pull something shitty like making it exclusive to the newer cards when they arrive.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA
Regarding how the tech started?
DLSS 2.0 looking better than native resolution is a bit black magic.
*shrug*. Nvidia promised it would get better over time when they announced the feature. Given the nature of machine learning it's not too surprising that it did. And again, a decent area of the GPU was dedicated to tensor cores that are otherwise doing *nothing* when you are playing a game. DLSS fell woefully short of what Nvidia promised initially, but fortunately they got it to where they promised in less than a year. Again, if you bought an RTX card, you literally paid extra in part because of those tensor cores.

TAA has always suffered from flaws inherent to the technique. DLSS leverages cores designed to accelerate machine learning derived code, and machine learning's strength is in image processing. If you bought an RTX card, you paid more so you could use this feature if the game supported it. My $1200 graphics card running the game faster than a top of the line Radeon card that costs multiple hundreds of dollars less? That doesn't feel like magic to me.

So wait in game do I lower the resolution then enable it? Or just enable it and it does the rest?
You just enable it in game.
 
Last edited:

Galaxea

Member
Oct 25, 2017
3,405
Orlando, FL
DLSS 2.0 is insanity. I can't wait to upgrade my 1660ti to a 3070 whenever it comes out. It's one of the reasons I haven't played through Control yet. That and Cyberpunk should be fun to experience using DLSS to utilize all the RTX features.
 

Isee

Avenger
Oct 25, 2017
6,235
Given the nature of machine learning it's not too surprising that it did.

It's an AI based tech. It improves with training, so it can get better.

More than just AI learning. They scrapped how the tech worked and started over from a different approach, especially as people weren't satisfied with the previous results.
No "shrug" or "it was always supposed to get better": the early reception of DLSS's quality played a significant role in why DLSS 2.0 was developed.

So wait in game do I lower the resolution then enable it? Or just enable it and it does the rest? This black magic has me confused lol

You just enable it.
 

Uhtred

Alt Account
Banned
May 4, 2020
1,340
So wait in game do I lower the resolution then enable it? Or just enable it and it does the rest? This black magic has me confused lol

With DLSS you set your target resolution as normal (probably the native resolution of your monitor), then turn on DLSS. DLSS will lower the internal resolution and reconstruct on its own.

I think performance mode is a 4x pixel count improvement, while quality is around 2x. So if you set your target resolution to 4k, DLSS performance mode actually runs the game at 1080p, while quality mode will render it at 1440p. If your target resolution is, say, 3440x1440 (an ultrawide monitor), then DLSS performance would render it at 1720x720 and quality at 2293x960, I believe. And 1080p would be rendered at 540p in performance mode and 720p in quality mode.

Someone correct me if I'm wrong.
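Those examples work out to roughly 2/3 of the output resolution per axis for quality mode and exactly half per axis for performance mode. A tiny sketch of the arithmetic (note that 2/3 per axis is really a 2.25x pixel count, not exactly 2x):

```python
# Internal render resolutions implied by the scale factors discussed above:
# performance = 1/2 per axis (4x fewer pixels), quality = ~2/3 per axis.

SCALE_PER_AXIS = {"quality": 2 / 3, "performance": 1 / 2}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple:
    s = SCALE_PER_AXIS[mode]
    return round(out_w * s), round(out_h * s)

for out_w, out_h in [(3840, 2160), (3440, 1440), (1920, 1080)]:
    for mode in ("quality", "performance"):
        w, h = internal_resolution(out_w, out_h, mode)
        print(f"{out_w}x{out_h} {mode:>11}: renders internally at {w}x{h}")
```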
 

Tahnit

Member
Oct 25, 2017
9,965
If DLSS 3.0 can run an internal res like 1080p or lower and make it look as good as 4k, we are getting into "why get a new graphics card" territory.
 

Uhtred

Alt Account
Banned
May 4, 2020
1,340
If DLSS 3.0 can run an internal res like 1080p or lower and make it look as good as 4k, we are getting into "why get a new graphics card" territory.

It's almost there. Performance mode is 1080p up to 4k and it looks comparable to native + TAA. Not quite as good, but not considerably worse either.
 

J-Skee

The Wise Ones
Member
Oct 25, 2017
11,102
Okay, after watching the Digital Foundry video... it might be super premature for me to say this considering we haven't fully seen what PS5/Series X games can do, but DLSS 2.0 is the best "next-gen" feature I've seen since Share Play on the PS4. If you're aware that it exists & you have the means to get it, I can't understand why you would get an AMD GPU over an Nvidia one.

At the same time, it's kinda scary because you can ride that GPU out longer with DLSS enabled, which would hurt Nvidia's pockets. At what point do they stop supporting certain cards? And with that in mind, I can understand why Sony & Microsoft would never want to invest in it in consoles. Console lifespans could go up significantly.
 

Uhtred

Alt Account
Banned
May 4, 2020
1,340
Okay, after watching the Digital Foundry video... it might be super premature for me to say this considering we haven't fully seen what PS5/Series X games can do, but DLSS 2.0 is the best "next-gen" feature I've seen since Share Play on the PS4. If you're aware that it exists & you have the means to get it, I can't understand why you would get an AMD GPU over an Nvidia one.

At the same time, it's kinda scary because you can ride that GPU out longer with DLSS enabled, which would hurt Nvidia's pockets. At what point do they stop supporting certain cards? And with that in mind, I can understand why Sony & Microsoft would never want to invest in it in consoles. Console lifespans could go up significantly.

Game developers will continue to push graphics. Even with a 100% improvement in frame rate, some visual effects will still require more raw GPU power to render. Ray tracing is one right off the bat. The performance of some implementations is bad enough that even a 2080ti can't run games at 60 FPS with it on. DLSS makes the game playable, but you can easily imagine that a more powerful GPU will allow you to enable more graphics effects and ray tracing features WHILE still maintaining a high frame rate.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA
With DLSS you set your target resolution as normal (probably the native resolution of your monitor), then turn on DLSS. DLSS will lower the internal resolution and reconstruct on its own.

I think performance mode is a 4x pixel count improvement, while quality is around 2x. So if you set your target resolution to 4k, DLSS performance mode actually runs the game at 1080p, while quality mode will render it at 1440p. If your target resolution is, say, 3440x1440 (an ultrawide monitor), then DLSS performance would render it at 1720x720 and quality at 2293x960, I believe. And 1080p would be rendered at 540p in performance mode and 720p in quality mode.

Someone correct me if I'm wrong.
I think you're right, except some games have a third mode (balanced) which is throwing me. In that case I think it goes 1800p, 1440p, 720p for quality, balanced and performance respectively.
 

seldead

Member
Oct 28, 2017
453
I'll look and see if I can find it, but the info I read specifically mentioned having to set up individual AI networks for each game, vs. DLSS 2.0 still needing support added per game but with every game using the same AI network for the training.

By setting up an individual model for each game, they mean fitting a new model on the training data for that game. That was the procedure for implementing DLSS 1.0. The person you initially replied to was correct about what changed in DLSS 2.0.

As I said, the extra per-game work to add DLSS 2.0 support is in supplying motion vectors as model input for inference in the render pipeline, not in training. Training for DLSS 2.0 is on generalised, non game specific content. This is common practice in modern ML and is known as transfer or meta learning, depending on the relationship of the initial task to the downstream task.
 
Last edited: