T002 Tyrant

Member
Nov 8, 2018
8,934
With DLSS becoming Nvidia's new tool for driving performance by "cheating" via AI upscaling, do you feel Nintendo's interest will be piqued? They could use it to upscale games to full 1080p docked (and full 720p in handheld mode), or even explore bringing future games up to 4K by providing their games' data to Nvidia (perhaps for Switch 2?) for titles that run at lower resolutions or framerates. In theory, could a revision of the Switch (like a "Pro") be given tensor cores to enable DLSS? Or is it more likely to be implemented in the Switch 2 with a clean, new Tegra chipset?

To me it sounds like a technology Nintendo could use to compete with more powerful systems by being smart, applying DLSS to their own games and to third-party ones too. Do you think the next Switch revision will use DLSS? Do you think Switch 2 will use, or could benefit from, DLSS?

I'm not claiming to be an expert here; I'm just curious whether Nvidia and Nintendo will be using this tech in the future.
 

Pottuvoi

Member
Oct 28, 2017
3,062
If the next Nintendo machine has a Turing-based GPU, there will be a lot of features more interesting than DLSS.

Perhaps most important would be mesh shading.
 

EduBRK

Member
Oct 30, 2017
981
Brazil
DLSS tech is still a long way from being something you can just switch on. It's far from trivial to train the AI, and the results are far from consistent.
 

JahIthBer

Member
Jan 27, 2018
10,376
DLSS sucks, take it from an Nvidia fan. Nintendo will just use temporal or checkerboard upscaling in the future if they wish.
 

Inugami

Member
Oct 25, 2017
14,995
DLSS tech is still a long way from being something you can just switch on. It's far from trivial to train the AI, and the results are far from consistent.
This.

It's also often much easier, and just as effective, to use other "dumb" reconstruction techniques (checkerboarding, or post-process sharpening) than to use specialty hardware that could be doing other, more interesting effects.
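To make "dumb" concrete, here's a toy checkerboard fill in Python/NumPy. This is purely illustrative (assume a frame where only half the pixels were rendered); real checkerboard renderers also reproject the previous frame with motion vectors rather than just averaging neighbours:

```python
import numpy as np

def checkerboard_fill(frame: np.ndarray) -> np.ndarray:
    """Fill the un-rendered half of a checkerboard frame by averaging
    each missing pixel's four rendered neighbours. Real CBR also
    blends in the previous frame via motion vectors."""
    out = frame.astype(np.float32)
    h, w = out.shape[:2]
    pad = np.pad(out, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neighbours = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                  pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    yy, xx = np.mgrid[0:h, 0:w]
    missing = (xx + yy) % 2 == 1   # the half we skipped rendering
    out[missing] = neighbours[missing]
    return out.astype(frame.dtype)
```

No training, no special hardware, and it costs a fraction of a millisecond.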
 

OddSock

Member
Oct 27, 2017
126
South Africa
Obligatory

[image]
 

laxu

Member
Nov 26, 2017
2,782
I would say no. DLSS works better the more pixels it has to work with; I don't feel it does a great job below 1440p. Since it's a reconstruction technique, it can also cause some details to look different. I have tested it some with Shadow of the Tomb Raider, and in that game I noticed that, for example, straw roofs on houses looked noticeably different, and some ropes were a bit different too. Not really a better-vs-worse thing, just different.

To me, one of the big benefits of the tech is how stable an image it produces. Far less shimmering compared to normal AA solutions. I would recommend using it mostly if playing at 4K and wanting more performance, say to run ray tracing. That's not going to be a thing the Nintendo Switch will do anytime soon, nor should it.

A more likely improvement is Nvidia's content-aware sharpening technique coming to Switch, as it produces good results when upscaling from a lower resolution.
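For what it's worth, that kind of sharpening is conceptually close to a plain unsharp mask; a minimal NumPy sketch (Nvidia's actual filter adapts its strength per pixel, this fixed-strength version just shows the idea):

```python
import numpy as np

def sharpen(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    """Unsharp mask: push each pixel away from its local average.
    Nvidia's content-adaptive filter varies `amount` per pixel
    based on local contrast; here it is a fixed constant."""
    f = img.astype(np.float32)
    pad = np.pad(f, ((1, 1), (1, 1), (0, 0)), mode="edge")
    local_avg = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
                 pad[1:-1, :-2] + pad[1:-1, 2:] + f) / 5.0
    out = f + amount * (f - local_avg)
    return np.clip(out, 0, 255).astype(img.dtype)
```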
 
Oct 27, 2017
5,618
Spain
Training an AI model to handle every possible situation a game can show, with satisfactory results, sounds like a complete nightmare, and the results of DLSS so far seem to confirm that.
 
OP
T002 Tyrant

Member
Nov 8, 2018
8,934
I would say no. DLSS works better the more pixels it has to work with; I don't feel it does a great job below 1440p. Since it's a reconstruction technique, it can also cause some details to look different. I have tested it some with Shadow of the Tomb Raider, and in that game I noticed that, for example, straw roofs on houses looked noticeably different, and some ropes were a bit different too. Not really a better-vs-worse thing, just different.

To me, one of the big benefits of the tech is how stable an image it produces. Far less shimmering compared to normal AA solutions. I would recommend using it mostly if playing at 4K and wanting more performance, say to run ray tracing. That's not going to be a thing the Nintendo Switch will do anytime soon, nor should it.

A more likely improvement is Nvidia's content-aware sharpening technique coming to Switch, as it produces good results when upscaling from a lower resolution.

How about 720p to 1080p? Or 540p to 720p (handheld mode), let's say, hypothetically, if this were implemented in a "Pro". I'm just wondering because AI can be trained and will only get better over the years, whereas other techniques don't improve.
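(Quick arithmetic on those jumps, for comparison with the roughly 1440p-to-4K case DLSS usually targets on PC:)

```python
# Per-axis scale factor and pixel multiplier for each hypothetical jump
for (lw, lh), (hw, hh) in [((1280, 720), (1920, 1080)),  # docked
                           ((960, 540), (1280, 720))]:   # handheld
    print(f"{lh}p -> {hh}p: {hw / lw:.2f}x per axis, "
          f"{(hw * hh) / (lw * lh):.2f}x pixels to reconstruct")
# prints: 720p -> 1080p: 1.50x per axis, 2.25x pixels to reconstruct
# prints: 540p -> 720p: 1.33x per axis, 1.78x pixels to reconstruct
```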
 

laxu

Member
Nov 26, 2017
2,782
How about 720p to 1080p? Or 540p to 720p (handheld mode), let's say, hypothetically, if this were implemented in a "Pro". I'm just wondering because AI can be trained and will only get better over the years, whereas other techniques don't improve.

I might have been wrong in my previous post. See https://www.nvidia.com/en-us/geforce/news/dlss-control-and-beyond/

It seems that the way they do it in Control is different from other games and might be more suitable for lower resolutions as well.
 
OP
T002 Tyrant

Member
Nov 8, 2018
8,934
I might have been wrong in my previous post. See https://www.nvidia.com/en-us/geforce/news/dlss-control-and-beyond/

It seems that the way they do it in Control is different from other games and might be more suitable for lower resolutions as well.

Hmmm, I still think DLSS will be "good enough", especially if Nintendo worked with Nvidia to make sure their games upscaled well, and I can only imagine that with frequent updates to DLSS the method will keep getting better in the coming years.

When it comes to third parties it's a mixed bag - but for first-party games, with cooperation between Nvidia and Nintendo, I reckon it could provide great results.
 

Thraktor

Member
Oct 25, 2017
570
I think there's some potential there for Nintendo, and I'd be surprised if they aren't at least exploring it for future hardware. There are a few reasons I'd expect it to work better on a Nintendo console than what we've seen thus far:

  • More mature implementation. When Nvidia first announced DLSS they acted as if it was just some magical "AI" that could automatically upscale anything near-perfectly, which set people's expectations way too high. Like any other upscaling algorithm, the implementation is very important, and developers (and Nvidia) will have to figure out how best to set up and train the neural net to get the best results. All we've really seen are the first-gen implementations, so I'd expect results to improve over the next few years.
  • Fixed hardware. My gut feeling is that working on a console platform would make it much easier to get good results with DLSS, as you have far fewer variables to consider, with no graphics settings, only one output resolution (well, two in Switch's case), so configuring and training the neural net should be more straightforward.
  • Nintendo's games tend to be bright, colourful and contrasty. This should make things like edge-finding easier for the neural net.
I don't think DLSS is ever going to be the kind of magic bullet for upscaling that some people were expecting, but I do think there's a use there for Nintendo, and I'd be interested to see if they use it in the successor to the Switch.
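To give a rough idea of what "setting up and training the neural net" means in practice, here's a minimal super-resolution training loop in PyTorch. This is a generic SRCNN-style sketch, not Nvidia's actual DLSS pipeline, and the random tensors stand in for captured (low-res render, native render) pairs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# SRCNN-style toy network: refines a bilinearly upscaled low-res
# frame into a guess at the native-res frame.
class ToySR(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),
        )

    def forward(self, x):
        return self.net(x)

model = ToySR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(1000):
    # Placeholder data: in reality, thousands of (low-res render,
    # native-res render) pairs captured from the actual game.
    native = torch.rand(8, 3, 128, 128)
    lowres = F.interpolate(native, scale_factor=0.5, mode="bilinear")
    coarse = F.interpolate(lowres, size=native.shape[-2:], mode="bilinear")

    loss = F.mse_loss(model(coarse), native)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The fixed-hardware point maps onto this directly: on a console you could train one network per game for exactly one input and one output resolution.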
 
OP
T002 Tyrant

Member
Nov 8, 2018
8,934
I think there's some potential there for Nintendo, and I'd be surprised if they aren't at least exploring it for future hardware. There are a few reasons I'd expect it to work better on a Nintendo console than what we've seen thus far:

  • More mature implementation. When Nvidia first announced DLSS they acted as if it was just some magical "AI" that could automatically upscale anything near-perfectly, which set people's expectations way too high. Like any other upscaling algorithm, the implementation is very important, and developers (and Nvidia) will have to figure out how best to set up and train the neural net to get the best results. All we've really seen are the first-gen implementations, so I'd expect results to improve over the next few years.
  • Fixed hardware. My gut feeling is that working on a console platform would make it much easier to get good results with DLSS, as you have far fewer variables to consider, with no graphics settings, only one output resolution (well, two in Switch's case), so configuring and training the neural net should be more straightforward.
  • Nintendo's games tend to be bright, colourful and contrasty. This should make things like edge-finding easier for the neural net.
I don't think DLSS is ever going to be the kind of magic bullet for upscaling that some people were expecting, but I do think there's a use there for Nintendo, and I'd be interested to see if they use it in the successor to the Switch.

I couldn't agree more. In addition, I think that if they worked closely with Nvidia, providing them with what they need to produce excellent results, at least first-party games could greatly benefit - and the more developers use it, the better the network gets. So in theory, DLSS can only get better by the time a Nintendo console that uses it comes out?
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Nintendo is pretty averse to IQ techniques, so I doubt they'll be using DLSS or the like. They'll more than likely use the tensor cores for something else, if at all.
 
Oct 27, 2017
9,418
This is just wrong. DLSS is OK-to-great in a lot of games. And it keeps getting better unlike other sharpening techniques.

I disagree. DLSS is implemented in what, 15 games right now? It was promised as the wave of the future at the RTX reveal conference a year and a half ago, because it was supposedly so easy to implement. What happened to "It just works"? Why is there a noticeable difference between native 4K and DLSS if it's supposed to look the same? It may look okay-to-great, but that's because the game probably already looks good at the native resolution it's actually rendering at, and it upsamples well. Right now, the way DLSS is implemented just doesn't work well.
 

speak_easy

Banned
May 12, 2018
38
Baltimore, Maryland
Well, this isn't going to happen anytime soon, seeing as the Switch SoC has no tensor cores to do the math required for DLSS or machine learning.

Correct me if I'm wrong, but DLSS is contingent on the tensor cores, which is why the non-RTX Turing GPUs (GTX 1xxx) do not support DLSS.

I don't think we will see tensor cores on an SoC, given the power constraints, at least until 7nm or lower (e.g. Apple's ML cores on its recent SoCs).
 

SharpX68K

Member
Nov 10, 2017
10,514
Chicagoland
Switch 2 in 2023 could have an Ampere-based Tegra SoC, if Nvidia makes one by early 2021. I'm assuming Ampere-based RTX cards will be released in H2 2020.

This would mirror the timeline between Maxwell (2014), Tegra X1 (early 2015) and the Nintendo Switch in early 2017.
 

Gitaroo

Member
Nov 3, 2017
7,985
They can and should definitely add Freestyle image sharpening + a DLSS filter + SMAA. The mCable Classic is already doing RIS-style upscaling to 1440p + SMAA, I think. Nothing is stopping Nintendo from releasing an upgraded dock with mCable Classic-like functionality. This could potentially stretch the Switch's lifespan and keep it compatible with all the current hardware.
 

Lagspike_exe

Banned
Dec 15, 2017
1,974
Comparing DLSS 4K to native 4K is flawed. DLSS 4K should be compared to the native resolution that has the same performance cost, whatever that is. It's similar to CB: 4K CB is not superior to 4K native, but it is (in most cases) superior to the native resolution with an equivalent performance cost.
 

pswii60

Member
Oct 27, 2017
26,657
The Milky Way
Comparing DLSS 4K to native 4K is flawed. DLSS 4K should be compared to the native resolution that has the same performance cost, whatever that is. It's similar to CB: 4K CB is not superior to 4K native, but it is (in most cases) superior to the native resolution with an equivalent performance cost.
Nobody is expecting a non-native 4K image to look "superior" to a native 4K image. Surely. Native is obviously the reference; it doesn't get better than that.
 

Lagspike_exe

Banned
Dec 15, 2017
1,974
Nobody is expecting a non-native 4K image to look "superior" to a native 4K image. Surely. Native is obviously the reference; it doesn't get better than that.
I didn't properly articulate my thought. I wanted to say that 4K CB/DLSS shouldn't be compared to native 4K; it should be compared to the native resolution that gives an equivalent framerate. For 4K CB, I believe that's 1440p-1600p or somewhere around there. If it's superior in IQ to those native resolutions, then it's a good technique.
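The napkin math supports that range, assuming 4K checkerboard shades roughly half the pixels of a native 4K frame:

```python
full_4k = 3840 * 2160          # 8,294,400 px at native 4K
cb_4k = full_4k // 2           # ~4,147,200 px actually shaded per frame
for name, w, h in [("1440p", 2560, 1440), ("1600p", 2560, 1600)]:
    print(f"{name}: {w * h:,} px vs ~{cb_4k:,} px for 4K CB")
# prints: 1440p: 3,686,400 px vs ~4,147,200 px for 4K CB
# prints: 1600p: 4,096,000 px vs ~4,147,200 px for 4K CB
```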
 

pswii60

Member
Oct 27, 2017
26,657
The Milky Way
I didn't properly articulate my thought. I wanted to say that 4K CB/DLSS shouldn't be compared to native 4K; it should be compared to the native resolution that gives an equivalent framerate. For 4K CB, I believe that's 1440p-1600p or somewhere around there. If it's superior in IQ to those native resolutions, then it's a good technique.
Yeah that I agree with!
 

TSM

Member
Oct 27, 2017
5,821
It doesn't make any sense for Nintendo to consider something that's going to use a lot of power and generate a whole lot of heat when they can just use a simple upscale. DLSS is just a way for Nvidia to throw a whole lot of spare processing power at a performance problem. Even then, the results vary from poor to not-bad.
 

Reinhard

Member
Oct 27, 2017
6,590
Isn't this a pointless discussion, since it requires tensor cores, which a low-powered Switch 2 SoC won't have? Also, DLSS is mostly a failure; in the majority of games it looks far worse than just playing at a higher resolution with 85% resolution scaling.
 

SiG

Member
Oct 25, 2017
6,485
I have a feeling that if Nintendo does decide to implement it, they'll utilize AI that "trains itself" on the fly as it upscales, by utilizing previous frame data for higher-res buffers.
Well, this isn't going to happen anytime soon, seeing as the Switch SoC has no tensor cores to do the math required for DLSS or machine learning.

Correct me if I'm wrong, but DLSS is contingent on the tensor cores, which is why the non-RTX Turing GPUs (GTX 1xxx) do not support DLSS.

I don't think we will see tensor cores on an SoC, given the power constraints, at least until 7nm or lower (e.g. Apple's ML cores on its recent SoCs).
CONTROL already sort of does this with its reconstruction, and it doesn't even need to utilize Tensor cores.

As for tensor cores on an Nvidia SoC, well... we'll have to wait and see. And even then, will Nintendo approve of the power draw?
 

dgrdsv

Member
Oct 25, 2017
11,843
DLSS requires a tensor array, which is in fact fairly expensive in transistors. Unless there are use cases for tensor-accelerated machine learning h/w that can be used in all games universally, I really don't see why a purely gaming machine would invest that much h/w into something that can be handled by the TV scaler - with lower quality, of course, but still, for free.

CONTROL already sort of does this with its reconstruction, and it doesn't even need to utilize Tensor cores.
Control's DLSS isn't really DL and can run on anything - but NV does plan to use DL again on top of Control's non-DL implementation in the future. So it will require tensors again.
 

SharpX68K

Member
Nov 10, 2017
10,514
Chicagoland
Going from 256 Maxwell GPU cores in Switch, and heavily underclocked, to however many Ampere GPU cores that might go into a Tegra X4 SoC with higher clocks, should be pretty awesome.

Docked performance of a Switch 2 in 2023 should rival or surpass base OG PS4.

Switch 2 portable performance -- can't even guess that right now, with it being roughly 3.5 to 4 years away.
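For context, the napkin math looks something like this (the Switch and PS4 figures are the known specs; the Switch 2 core count and clock are pure guesses on my part):

```python
# FP32 throughput = cores * clock * 2 ops per clock (fused multiply-add)
def gflops(cores: int, mhz: float) -> float:
    return cores * mhz * 2 / 1000

print(gflops(256, 768))     # Switch docked (Maxwell):   ~393 GFLOPS
print(gflops(256, 307.2))   # Switch handheld:           ~157 GFLOPS
print(gflops(1152, 800))    # PS4 (GCN, for reference): ~1843 GFLOPS
print(gflops(1024, 1000))   # hypothetical Switch 2:    ~2048 GFLOPS
```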
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Going from 256 Maxwell GPU cores in Switch, and heavily underclocked, to however many Ampere GPU cores that might go into a Tegra X4 SoC with higher clocks, should be pretty awesome.

Docked performance of a Switch 2 in 2023 should rival or surpass base OG PS4.

Switch 2 portable performance -- can't even guess that right now, with it being roughly 3.5 to 4 years away.
Just under the XBO in portable mode is my guess, with docked sitting in between the PS4 and PS4 Pro.
 

JeffGubb

Giant Bomb
Verified
Oct 25, 2017
842
I disagree. DLSS is implemented in what, 15 games right now? It was promised as the wave of the future at the RTX reveal conference a year and a half ago, because it was supposedly so easy to implement. What happened to "It just works"? Why is there a noticeable difference between native 4K and DLSS if it's supposed to look the same? It may look okay-to-great, but that's because the game probably already looks good at the native resolution it's actually rendering at, and it upsamples well. Right now, the way DLSS is implemented just doesn't work well.

"It just works" was never about DLSS. That was about ray tracing, and that was in comparison to an artist baking lighting into every scene.

DLSS was always going to require a lot of time because Nvidia has to run the game for hours on its supercomputers to teach the algorithm. That's why Metro Exodus's DLSS implementation has improved since release. But Control's was excellent at launch. Sure, it's a slow adoption, but teaching an algorithm to add information that isn't present in the actual render, based on a learned understanding of what the game looks like, is a smart idea that still has a ton of potential and a couple of solid examples.
 
Nov 8, 2017
13,086
DLSS requires a tensor array, which is in fact fairly expensive in transistors. Unless there are use cases for tensor-accelerated machine learning h/w that can be used in all games universally, I really don't see why a purely gaming machine would invest that much h/w into something that can be handled by the TV scaler - with lower quality, of course, but still, for free.

I agree that the transistor budget is probably better spent on more Turing/post-Turing cores. Turing's Content Adaptive Shading should be a neat thing devs can take advantage of. If they're on 5nm EUV-type processes in 2022/2023, I could see them getting 768+ Turing/post-Turing cores at vaguely similar clock speeds for a notable boost in performance, but it'll all depend on pricing, I guess.
 

EvilBoris

Prophet of Truth - HDTVtest
Verified
Oct 29, 2017
16,678
The problem with DLSS for Nvidia is that it potentially stifles the need for them to keep selling us new GPUs so frequently. This is why I get the distinct impression they are holding back somewhat. The potential for DL and how it can improve graphics and even game streaming is mind-boggling.

Have we ever heard from a developer about how the training process works?
 

pulsemyne

Member
Oct 30, 2017
2,635
"It just works" was never about DLSS. That was about ray tracing, and that was in comparison to an artist baking lighting into every scene.

DLSS was always going to require a lot of time because Nvidia has to run the game for hours on its supercomputers to teach the algorithm. That's why Metro Exodus's DLSS implementation has improved since release. But Control's was excellent at launch. Sure, it's a slow adoption, but teaching an algorithm to add information that isn't present in the actual render, based on a learned understanding of what the game looks like, is a smart idea that still has a ton of potential and a couple of solid examples.
I agree. The DLSS in Metro has improved massively over time, to the point where it now looks really good. Even BFV has improved a lot as well. Control's was good from the very start.
DLSS has a lot of potential, and I think time will prove its worth.
 

Mr.Gamerson

Member
Oct 27, 2017
906
They can and should definitely add Freestyle image sharpening + a DLSS filter + SMAA. The mCable Classic is already doing RIS-style upscaling to 1440p + SMAA, I think. Nothing is stopping Nintendo from releasing an upgraded dock with mCable Classic-like functionality. This could potentially stretch the Switch's lifespan and keep it compatible with all the current hardware.

Yeah, something like this would help give the Switch an extra boost in image quality, since it tends to operate at lower resolutions, without drastically affecting performance.
 

Pokemaniac

Member
Oct 25, 2017
4,944
I don't really see the point of DLSS. From the footage I've seen, it doesn't really seem to make a big difference, and it seems rather prone to artifacting.