
Dekuman

Member
Oct 27, 2017
19,032
So I got my hands on an RTX 3070 today, upgraded from an AMD Vega 64, and I'm just blown away by how well DLSS works. I know I'm LTTP with it, but it feels like the biggest change we've had in graphics for years.

Now don't get me wrong, console games are still going to look stunning when studios like ND and Guerrilla get their games out, but imagine what visuals we could get if a game could just render at 1080p and DLSS it up to 4K instead. They could probably do so much more with RT.

When AMD gets their DLSS stuff going, will the consoles be able to support it?
Well, the consoles were too far along in development when DLSS was introduced, and it came from the rival of the vendor they contracted with.

I don't think it's a mistake, in that context.
 

tokkun

Member
Oct 27, 2017
5,435
The restriction is entirely on Nvidia's end though. It's still officially in 'Early Access', and there's no way for developers to access the tech unless they've been hand-picked by Nvidia. If uptake is still slow after it's been made publicly available, then you'll have a point, but until then, hand-wringing about how it's in so few games is premature.

The restriction is part of the reason I think it is probably non-trivial to implement DLSS and get good results. The reputation DLSS has right now is based on Nvidia tightly controlling who gets to use it, and probably also providing direct engineering support.

I suspect that if Nvidia started allowing anyone to use DLSS, that we would get a lot of crappy implementations with bad image quality and artifacting, and people's general perception of DLSS would get more negative. At least, that's the only explanation I can come up with for restricting access.
 

Vash63

Member
Oct 28, 2017
1,681
I suspect first party devs (especially Sony's) will have their own similar solutions. They were pretty far ahead on reconstruction with the PS4 Pro, and tuning the solution for each game at fixed resolutions will be easier for each studio to perfect.
 
Oct 27, 2017
3,611
The restriction is part of the reason I think it is probably non-trivial to implement DLSS and get good results. The reputation DLSS has right now is based on Nvidia tightly controlling who gets to use it, and probably also providing direct engineering support.

I suspect that if Nvidia started allowing anyone to use DLSS, that we would get a lot of crappy implementations with bad image quality and artifacting, and people's general perception of DLSS would get more negative. At least, that's the only explanation I can come up with for restricting access.

You're right of course, but I think there's a good chance it's because it's still in development rather than because it's fundamentally flawed and will always require Nvidia supervision to implement.

Remember that 'DLSS 2.0' hasn't even been around for a year yet (and it was a pretty rough year at that).
 

ShinUltramanJ

Member
Oct 27, 2017
12,950
DLSS seems very impressive for the games that make use of it.

I'm curious as to what will happen when AMD comes up with their own version. That coupled with Smart Access Memory could prove to be even more impressive.
 

LCGeek

Member
Oct 28, 2017
5,891
That's actually the single area where consoles will be ahead though; the exclusives will focus on leveraging the SSD loading speed in ways that multiplatform games, which have to target a lowest common denominator in hardware terms, are less likely to do.

They will only be ahead if devs gimp those multiplatforms, and most haven't in general. I just drag and drop files (redoing symlinks) or move the big files the game consistently needs. It's not affecting current multiplatform games released this year.

Even then it means little if you're using a RAM drive like I am, which is still faster than the PS5's solution: 64GB of DDR4-3800 at CL16. Windows rarely wants more than 20GB, so the rest is for games, production, or networking.

PC gamers aren't restricted to SSDs. This advantage will be wiped out without a doubt when DDR5 shows up, based on the specs and having DDR4 in front of me.

DLSS seems very impressive for the games that make use of it.

I'm curious as to what will happen when AMD comes up with their own version. That coupled with Smart Access Memory could prove to be even more impressive.

We are likely to see DLSS and Nvidia's SAM solution before AMD has a viable answer to DLSS combined with their own SAM tech.
 

mario_O

Member
Nov 15, 2017
2,755
It's probably coming to consoles once AMD finishes its 'Super Resolution' thing. I doubt it will be as good as DLSS, but it should be better than checkerboard upscaling.
 

Dekuman

Member
Oct 27, 2017
19,032
DLSS has a hardware component and requires tensor cores; it's not 'free'. And you can't retroactively add it to old hardware without it possibly costing more to implement than the benefit is worth.
 

ShinUltramanJ

Member
Oct 27, 2017
12,950
We are likely to see DLSS and Nvidia's SAM solution before AMD has a viable answer to DLSS combined with their own SAM tech.

Maybe, maybe not.

I just hope AMD's answer to DLSS allows for use with any game, and not just specific ones. Even if it weren't as good, I'd rather have the ability to use it with any game I want.
 

RedHeat

Member
Oct 25, 2017
12,716
Sony already has their own methods via Insomniac's/Bluepoint's temporal injection. Checkerboard rendering is still noticeably flawed, but I can see that improving this gen.
 

dadoes

Member
Feb 15, 2018
462
DLSS isn't "free", NV is paying for it with a ~20% die size increases.
They can do that because this die area is then used in ML markets where products are sold for thousands of USD.
Spending as many transistors on DLSS only would be a waste and a net loss.
When (and if) there will be other gaming applications for ML h/w then we will see it appear in gaming consoles.


DirectML is a compute API; you don't need any h/w to support it, but you do need h/w that can run ML fast enough for it to be usable for real-time graphics.
All RDNA2 GPUs have support for packed math (FP16, INT8, INT4), but a) packed math is not ideal for ML, it just runs faster than FP32, and it's hard to say if that's fast enough for something like DLSS, and b) it's the same SIMDs which do FP32, so you can do either shading or ML on them.
Also, the heart of the DLSS tech isn't the API or the h/w, it's the NN which was created, trained, and is now supported by NV. Somebody would have to do the same for non-NV h/w, including consoles.

Lol so many acronyms.
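
For anyone lost in the acronym soup above, here's a minimal illustrative sketch of what "running an NN upscaler with packed FP16 math instead of FP32" means in practice. It uses PyTorch purely as a stand-in; this is not the actual DLSS or DirectML API, the toy network is untrained, and every name and size in it is made up for illustration.

Code:
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    """Toy stand-in for an ML upscaler; untrained, illustration only."""
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # rearranges channels into a 2x larger image

    def forward(self, x):
        return self.shuffle(self.body(x))

# Use FP16 ("packed math") only where the hardware actually runs it fast.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

model = TinyUpscaler().to(device=device, dtype=dtype).eval()
frame_1080p = torch.rand(1, 3, 1080, 1920, device=device, dtype=dtype)  # stand-in for a rendered frame

with torch.no_grad():
    frame_4k = model(frame_1080p)  # -> 1 x 3 x 2160 x 3840

print(frame_4k.shape, frame_4k.dtype)

The point of the quote is that the SIMDs doing this math on RDNA2 are the same ones doing shading, and that the actually hard part, the trained network itself, is what NV built and maintains.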
 

LCGeek

Member
Oct 28, 2017
5,891
Maybe, maybe not.

I just hope AMD's answer to DLSS allows for use with any game, and not just specific ones. Even if it weren't as good, I'd rather have the ability to use it with any game I want.

I can vibe with that. Either vendor having a great upscaling solution is great for expanding their markets and for making aspects of high-end gaming more viable for people without tons of cash to spend.
 
Aug 30, 2020
2,171
AI-based reconstruction is definitely the future (and the present), but a console can never have every cutting-edge tech when it launches.

I'm damn impressed and very grateful that the console RT solution is as good as it is. I was worried consoles would be going completely without RT all this gen, which would slow PC adoption as well.
 

luoapp

Member
Oct 27, 2017
506
From what I've read about DLSS, there is nothing special on the PC/console side; all the magic (training) happens on Nvidia's servers. I think it's possible for MS/Sony to add client-side code for a DLSS-like solution, if they and AMD want to invest in the server side of the equation.
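
A minimal sketch of that split, assuming PyTorch and ONNX Runtime purely as stand-ins (the real pipeline would be whatever AMD/MS/Sony ship, and the tiny network here is an untrained placeholder): the expensive part, training, happens offline on the vendor's servers, and the client only needs the exported network plus lightweight inference code.

Code:
import numpy as np
import torch
import torch.nn as nn

# "Server side": train (here: pretend to train) a small upscaling network
# offline, then export it once.
net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 3 * 4, 3, padding=1),
    nn.PixelShuffle(2),  # 2x spatial upscale
)
# ... training against high-resolution ground-truth frames would go here ...
dummy = torch.rand(1, 3, 270, 480)
torch.onnx.export(net, dummy, "upscaler.onnx",
                  input_names=["frame"], output_names=["out"], opset_version=11)

# "Client side": the game/console only ships the exported file and inference code.
import onnxruntime as ort

session = ort.InferenceSession("upscaler.onnx", providers=["CPUExecutionProvider"])
frame = np.random.rand(1, 3, 270, 480).astype(np.float32)
out = session.run(None, {"frame": frame})[0]
print(out.shape)  # (1, 3, 540, 960)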
 

dgrdsv

Member
Oct 25, 2017
12,024
I suspect first party devs (especially Sony's) will have their own similar solutions. They were pretty far ahead on reconstruction with the PS4 Pro, and tuning the solution for each game at fixed resolutions will be easier for each studio to perfect.
CBR reconstruction was invented by Ubisoft, and TAA supersampling - which is basically what TAA injection/reconstruction is - was invented by Crytek, I think? Both happened years before the release of the PS4 Pro.
The thing with ML-based reconstruction is that there's very little point in each studio maintaining its own NN for such reconstruction, as it would be mostly universal. Thus the more likely scenario is a middleware solution made and maintained by either the platform holder or the engine provider (Epic, Crytek, DICE, etc.).

They don't need to train on every game with DLSS 2.0 though, right?
They don't, but further training and NN improvements will still lead to better results. This is partially why they added the Ultra Performance mode in DLSS 2.1 a year later.

The point is that it's coming and it's hardware agnostic. DLSS 1.0 was 2/10 as well.
So hardware agnostic that they've used an RTX GPU with tensor cores for that presentation.
 

astro

Member
Oct 25, 2017
57,211
Why don't consoles have their own system-level filters like AMD and Nvidia do?

I get why there's no DLSS as they use AMD, but I don't get the above.
 

degauss

Banned
Oct 28, 2017
4,631
Eh, we get consoles for not much more than the price of a 1TB PCIe SSD on its own; there are no "mistakes".
 

inner-G

Banned
Oct 27, 2017
14,473
PNW
It's impossible on their current architecture; AMD doesn't have a DLSS competitor.

It also uses dedicated hardware/cores; it's not just some software trick.
 
Oct 27, 2017
3,962
So I got my hands on an RTX 3070 today, upgraded from an AMD Vega 64, and I'm just blown away by how well DLSS works. I know I'm LTTP with it, but it feels like the biggest change we've had in graphics for years.

Now don't get me wrong, console games are still going to look stunning when studios like ND and Guerrilla get their games out, but imagine what visuals we could get if a game could just render at 1080p and DLSS it up to 4K instead. They could probably do so much more with RT.

When AMD gets their DLSS stuff going, will the consoles be able to support it?
Not a mistake when it was never an option
 
Oct 25, 2017
5,588
Racoon City
DirectML is a compute API; you don't need any h/w to support it, but you do need h/w that can run ML fast enough for it to be usable for real-time graphics.
All RDNA2 GPUs have support for packed math (FP16, INT8, INT4), but a) packed math is not ideal for ML, it just runs faster than FP32, and it's hard to say if that's fast enough for something like DLSS, and b) it's the same SIMDs which do FP32, so you can do either shading or ML on them.
Also, the heart of the DLSS tech isn't the API or the h/w, it's the NN which was created, trained, and is now supported by NV. Somebody would have to do the same for non-NV h/w, including consoles.

Came in to give a concise answer and saw you already did.
 

Deleted member 1238

User requested account closure
Banned
Oct 25, 2017
3,070
Absolutely. Three years from now it will be a BIG mistake, and it will only get bigger.

Obviously it wasn't necessarily available since the consoles went with AMD, but it's a dark shadow that will hang over this entire generation.
 

Deleted member 34714

User requested account closure
Banned
Nov 28, 2017
1,617
There is this assumption that DLSS is 100% coming to the Switch, but is there anything that firmly confirms that, or even hints at it? DLSS is not free, like many have said. It involves adding hardware cores to the chip, thus increasing its cost to manufacture, and for what? Some games that may utilize it or need it? We're talking about Nintendo here, and I don't think tensor cores that only benefit certain games are something to expect from them.

And stop with the DirectML stuff. It's an API that utilizes whatever it gets its hands on, for the sake of DirectX on PC.
 

Rikimaru

Member
Nov 2, 2017
852
I do not think DLSS has a strict dependency on tensor compute. It's just a good upscaler algorithm.
Insomniac's temporal injection is rather good too.
 

laxu

Member
Nov 26, 2017
2,785
I could understand that being true given the similarities between TAA and DLSS. And yet, DLSS 2.0 adoption still seems low, even among TAA games. This is what makes me suspicious that it's not so trivial. Given the marketing value that DLSS has for Nvidia, I'd think they would be throwing engineers at companies to help them implement DLSS.

"Most" as in > 50%? That does not seem plausible.

There are still only about 20 games available that support DLSS 2.0. There are maybe another 10-20 that have announced plans to support it.

I haven't seen stats for 2020, but it looks like there were about 8000 games released on Steam in 2019 alone. So I would be surprised if even 1% of games supported DLSS in 2021.

I should have said "most AAA games". It's also going to be a feature of Unreal Engine AFAIK, so a lot more games could support it.
 

Gitaroo

Member
Nov 3, 2017
8,089
Checkerboard rendering is still a great solution IMO. All the previous checkerboard comparisons were at 30fps; if devs target 60fps, there are twice as many frames blending each second, which means less image breakup and better image quality in motion with fewer artifacts. If you look at Ghost of Tsushima, the ghosting artifacts from moving grass under TAA are significantly reduced. Other image-breaking artifacts from checkerboard rendering should also be reduced.
 

dgrdsv

Member
Oct 25, 2017
12,024
Where did you get that from?
According to this reddit post it's much less, but I don't know how accurate it is:
RTX adds ~1.95mm2 per TPC (tensors 1.25, RT 0.7) : hardware (reddit.com)
Similar estimation.
You have to remember that it's more than just the processing h/w itself that's needed in a GPU to add new units, though.
You have to add new data paths, which leads to adding more on-die storage and beefier caches.
The processing h/w itself is in the range of 20% for RT+TC, with TC being the bigger consumer here, but if you account for the changes required to actually add this h/w into the GPU then it will be more than that.
So ~20% for TCs alone seems like a close enough estimate.
Then again, this is based on Turing, and we don't know how this has changed with the new RTCs and TCs in Ampere.
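
For a rough back-of-the-envelope on what that per-TPC figure alone works out to, before the extra data paths and caches mentioned above (the TU102 numbers used here, ~754 mm^2 and 36 TPCs, are my own assumptions for illustration, not from the post or the reddit thread):

Code:
# Back-of-the-envelope using the reddit per-TPC figure.
area_per_tpc_mm2 = 1.95   # tensors 1.25 + RT 0.7, per the reddit estimate
tpcs = 36                 # assumed TU102 (2080 Ti class) TPC count
die_mm2 = 754.0           # assumed TU102 die size

rt_tc_area = area_per_tpc_mm2 * tpcs      # ~70 mm^2
share = 100.0 * rt_tc_area / die_mm2      # ~9% of the die
print(f"RT+TC processing area: {rt_tc_area:.0f} mm^2 (~{share:.0f}% of the die), "
      "before the extra data paths and caches")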
 

platocplx

2020 Member Elect
Member
Oct 30, 2017
36,075
Checkerboard rendering is still a great solution IMO. All the previous checkerboard comparisons were at 30fps; if devs target 60fps, there are twice as many frames blending each second, which means less image breakup and better image quality in motion with fewer artifacts. If you look at Ghost of Tsushima, the ghosting artifacts from moving grass under TAA are significantly reduced. Other image-breaking artifacts from checkerboard rendering should also be reduced.
Yeah, and they can also render at a higher base resolution, which helps it come out clean.
 

Gitaroo

Member
Nov 3, 2017
8,089
Yeah, and they can also render at a higher base resolution, which helps it come out clean.
Yup, as long as the pixel count matches the screen, like 2160p checkerboard instead of 1800p checkerboard. I think Days Gone easily looks cleaner than Ghost, and I believe it's still using a 2160p checkerboard setup.
 

Polyh3dron

Prophet of Regret
Banned
Oct 25, 2017
9,860
They will only be ahead if devs gimp those multiplatforms, and most haven't in general. I just drag and drop files (redoing symlinks) or move the big files the game consistently needs. It's not affecting current multiplatform games released this year.
What I am talking about are games with core game design elements that depend on being able to load data from the SSD very quickly, such as the PS5's upcoming Ratchet & Clank game, not just quicker loading screens.
 

Vash63

Member
Oct 28, 2017
1,681
CBR reconstruction was invented by Ubisoft, and TAA supersampling - which is basically what TAA injection/reconstruction is - was invented by Crytek, I think? Both happened years before the release of the PS4 Pro.
The thing with ML-based reconstruction is that there's very little point in each studio maintaining its own NN for such reconstruction, as it would be mostly universal. Thus the more likely scenario is a middleware solution made and maintained by either the platform holder or the engine provider (Epic, Crytek, DICE, etc.).

I'm not talking about the company that invented it (though if you keep dropping the quality bar, a basic bilinear upscale could be seen as reconstruction); I mean Sony was really the first to start using it at quality levels high enough to look almost native. Spider-Man or God of War, for example, look very close to native 4K on a PS4 Pro. It's not as good as DLSS, which does similar feats at quarter resolution instead of half, but it's still impressive.
 

LCGeek

Member
Oct 28, 2017
5,891
What I am talking about are games with core game design elements that depend on being able to load data from the SSD very quickly, such as the PS5's upcoming Ratchet & Clank game, not just quicker loading screens.

Any gamer gets the benefits of solid IO from DDR4 in any decent streaming engine, beyond just loading screens. Your OS runs better too. PS5 games aren't the first to benefit from decent IO; streaming engines are.

I welcome more dev flexibility in functionality, but loading screens aren't the only benefit of solid IO performance when you've got it.

This is the first gen where almost no platform will be weak in this area, for once.
 

The Lord of Cereal

#REFANTAZIO SWEEP
Member
Jan 9, 2020
9,808
Isn't AMD working on a DLSS competitor that will support RDNA2? If that's the case, then there's no reason it shouldn't support the Xbox consoles at the very least, and it should most likely work on the PS5 as well.