ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987


Summary:
  • DLSS 1.9 broke down under transparencies, resulting in ghosting and flickering
  • higher resolution fixes subpixel breakup, but at a correspondingly higher rendering cost
  • 2.0 fixes subpixel detail issues like flickering
  • movement also breaks 1.9
  • ghosting trail still exists in 2.0 but is greatly reduced
  • pseudo-random micro-detail (rock flecks, skin pores) is better preserved in 2.0
  • text textures have higher contrast but are less legible than at native res
  • 2.0 can have high contrast edge breakup at times, but not really visible at regular zoom
  • DLSS doesn't blur micro-detail in motion unlike TAA
  • slight haloing with 2.0 (more visible at 800% magnification)
  • SHARPENING IS TWEAKABLE IN THE SDK
  • 1080p to 4K is 130% higher performance than native 4K in performance mode (4x scale) on a 2080 Ti
  • 1440p to 4K is 67% better performance
  • 2.0 costs more than 1.9 on paper, but in practice it's marginally faster
  • 1080p to 4K through DLSS has 11% lower performance than 1080p to 4K with regular upscaling and TAA
  • on a 2060, that same test shows DLSS 15% lower than 1080p upscaled
  • DLSS more expensive on lower end gpus
  • 540p to 1080p DLSS resolves subpixel detail that a native 1080p image cannot
  • Alan Wake ran at 540p on the 360
  • halo artifact is more noticeable at lower resolution
  • on a 2060, everything maxed, 720p to 1440p runs in the 40s in stressful environments (good for variable refresh rate monitors tho)
  • using Alex's optimized settings from before, drops go as low as the mid-50s
  • dropping to reconstructed 1080p, you'll stay above 60fps
  • best image reconstruction solution so far, according to Alex
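To put those quoted percentages in frame-rate terms, here's a quick sketch (the 40 fps native-4K baseline is a made-up example number; only the percentage gains come from the video):

```python
# Rough arithmetic on the performance figures quoted in the summary above.
# baseline_fps is a hypothetical native-4K number on a 2080 Ti, purely
# for illustration; the percentage deltas are the ones DF measured.

def scaled_fps(baseline_fps: float, gain_pct: float) -> float:
    """Apply a percentage performance gain to a baseline frame rate."""
    return baseline_fps * (1 + gain_pct / 100)

native_4k = 40.0  # hypothetical native 4K fps

perf_mode = scaled_fps(native_4k, 130)  # 1080p -> 4K DLSS: +130%
quality   = scaled_fps(native_4k, 67)   # 1440p -> 4K DLSS: +67%

print(f"native 4K:        {native_4k:.0f} fps")
print(f"1080p -> 4K DLSS: {perf_mode:.0f} fps")
print(f"1440p -> 4K DLSS: {quality:.0f} fps")
```

So a hypothetical 40 fps native-4K scene would land in the 90s in performance mode, which is why the "stay above 60" numbers for the 2060 later in the summary are plausible.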
Full Article:

www.eurogamer.net

Remedy's Control vs DLSS 2.0 - AI upscaling reaches the next level

Consider this. Ten years ago, Digital Foundry was mulling over Alan Wake's 960x540 resolution (actually 544p!) and wond…

Shame this won't be on consoles. Could be a megaton.
 
Last edited:

MasterC12TF

Alt Account
Banned
Mar 31, 2020
78
With techniques like this becoming standard for the next generation, it looks like the only distinguishable difference between platforms will be in loading times. It will make for pretty boring DF versus videos in the future :(

IMO natively rendering 4K is not a very creative way to spend any excess TFs
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
Impressive stuff. I like the fact that Alex also speaks highly of combining ray tracing with DLSS 2.0. That's a hugely potent combination for low end NVIDIA GPUs and for a potential Switch 2.
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
With techniques like this becoming standard for the next generation, it looks like the only distinguishable difference between platforms will be in loading times. It will make for pretty boring DF versus videos in the future :(

IMO natively rendering 4K is not a very creative way to spend any excess TFs
It's still uncertain whether next-gen consoles will support hardware-accelerated deep learning, I believe, so I don't think we can assume either XSX or PS5 will use a technology similar to this (never mind the fact that AMD haven't announced anything like this yet).
 

BeI

Member
Dec 9, 2017
6,056
Shame this won't be on consoles. Could be a megaton.

There might be good reconstruction alternatives eventually, if not immediately. For now though, seems like a potential game changer for current RTX cards going into next-gen; seems like RX 5xxx cards might not fare as well.
 

Mullet2000

Member
Oct 25, 2017
5,930
Toronto
DLSS 2.0 is pretty amazing. Should really help keep the current RTX cards relevant for a long time even as next gen goes on.
 

CarbonCrush

Member
Oct 27, 2017
1,156
With techniques like this becoming standard for the next generation, it looks like the only distinguishable difference between platforms will be in loading times. It will make for pretty boring DF versus videos in the future :(

IMO natively rendering 4K is not a very creative way to spend any excess TFs
This makes no sense as unspent rendering resources will instead be spent elsewhere. There will still be differences.
 

Deleted member 4970

User requested account closure
Banned
Oct 25, 2017
12,240
amazing

[image: py7xfhH.png]
 

Stacey

Banned
Feb 8, 2020
4,610
The only way to tell the difference between native and DLSS 2.0 is to check your performance...

Jesus, this tech is astounding.
 

Xando

Member
Oct 28, 2017
27,709
There might be good reconstruction alternatives eventually, if not immediately. For now though, seems like a potential game changer for current RTX cards going into next-gen; seems like RX 5xxx cards might not fare as well.
AMD has not yet announced any hardware-accelerated deep learning, if I understood correctly. So unless they're keeping it secret, the earliest we're gonna get it on consoles would be the mid-gen refresh (if there is one).
 

Sho_Nuff82

Member
Nov 14, 2017
18,592
If AMD did have an alternative to this, wouldn't they be trying to get out public demos so that everyone doesn't scoop up NVIDIA cards in the interim?

Hopes for next-gen "secret sauce" from RDNA2 might pan out, and maybe Sony/MS have their own custom solutions, but right now this looks NVIDIA-exclusive.
 
Oct 26, 2017
9,859
There might be good reconstruction alternatives eventually, if not immediately. For now though, seems like a potential game changer for current RTX cards going into next-gen; seems like RX 5xxx cards might not fare as well.

Maybe, but I don't expect AMD to match both NVIDIA's ray tracing and DLSS anytime soon, assuming they are working on the latter as well.
 

BeI

Member
Dec 9, 2017
6,056
Impressive stuff. I like the fact that Alex also speaks highly of combining ray tracing with DLSS 2.0. That's a hugely potent combination for low end NVIDIA GPUs and for a potential Switch 2.

Switch 2 might not even need much more power to take current games from 1080p to 4k DLSS performance mode. Perhaps a better CPU would be nice though.
 

Trago

Member
Oct 25, 2017
3,614
The results are just bananas.

This will be great for general performance gains, but also excellent for very graphically demanding titles on PC next gen.

I do wonder if this sort of tech could be possible on a theoretical "Switch 2" or something. Implementing such a thing would make porting next gen games over interesting.
 
Oct 27, 2017
5,618
Spain
If AMD did have an alternative to this, wouldn't they be trying to get out public demos so that everyone doesn't scoop up NVIDIA cards in the interim?

Hopes for next-gen "secret sauce" from RDNA2 might pan out, and maybe Sony/MS have their own custom solutions, but right now this looks NVIDIA-exclusive.
Microsoft has been experimenting with ML upscaling through DirectML in the context of DirectX 12 Ultimate. It will obviously be running on RDNA 2, so it might become a standard DirectX 12 Ultimate feature.
 

DavidDesu

Banned
Oct 29, 2017
5,718
Glasgow, Scotland
That's insane (the example above). Sooo we can have super powerful GPUs only needing to render at 1080p, pushing visuals and frame rates to ludicrous degrees, and upscaling to 4K unnoticeably. Hahaha wow.
 
Oct 25, 2017
14,741
That's insane (the example above). Sooo we can have super powerful GPUs only needing to render at 1080p, pushing visuals and frame rates to ludicrous degrees, and upscaling to 4K unnoticeably. Hahaha wow.
If we get widespread adoption of DLSS next gen, it can have an even bigger effect in allowing the lower end RTX cards to keep up wonderfully in 1080p TVs for the entire generation.
 

Alucardx23

Member
Nov 8, 2017
4,719
AMD has not yet announced any hardware-accelerated deep learning, if I understood correctly. So unless they're keeping it secret, the earliest we're gonna get it on consoles would be the mid-gen refresh (if there is one).

It has been confirmed that the XSX will be able to accelerate machine learning code.

The DirectML support for machine learning is something that could also prove interesting for developers and ultimately gamers. The 12 TFLOPS of FP32 compute can become 24 TFLOPS of FP16 thanks to the Rapid-Pack Math feature AMD launched with its Vega-based GPUs, but machine learning often works with even lower precision than that. With some extra work on the shaders in the RDNA2 GPU, it is able to achieve 49 TOPS at I8, 8-bit integer precision, and 97 TOPS of I4 precision. Microsoft states applications of DirectML can include making NPCs more intelligent, making animations more lifelike, and improving visual quality.
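The TOPS figures in that quote follow the usual rule of thumb that throughput roughly doubles each time precision halves. A quick sketch of the arithmetic (the quoted 49/97 TOPS sit slightly above this naive doubling, so those are presumably measured rather than derived figures):

```python
# Naive "double the rate each time precision halves" arithmetic,
# starting from the XSX's quoted 12 TFLOPS of FP32 compute.
fp32 = 12.0        # TFLOPS at FP32
fp16 = fp32 * 2    # 24 TFLOPS via packed FP16 (Rapid Packed Math)
int8 = fp16 * 2    # ~48 TOPS at 8-bit integer precision
int4 = int8 * 2    # ~96 TOPS at 4-bit integer precision

print(fp16, int8, int4)  # close to the article's 24 / 49 / 97 figures
```

Lower precision is what makes this relevant to ML inference: trained networks often tolerate INT8 or even INT4 weights with little quality loss, so the same shaders deliver several times the raw operation rate.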




www.overclock3d.net

Microsoft's DirectML is the next-generation game-changer that nobody's talking about

AI has the power to revolutionise gaming, and DirectML is how Microsoft plans to exploit it
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
This makes Lockhart such a ridiculously good proposition from MS.

I can see the ads already: budget 4K next-gen gaming for just 299.
 

Stacey

Banned
Feb 8, 2020
4,610
Being able to play Control maxed out on a 2060 at 1080p/60fps with improved image quality is just... wow.
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
Switch 2 might not even need much more power to take current games from 1080p to 4k DLSS performance mode. Perhaps a better CPU would be nice though.
Yeah, if you can do DLSS 2.0 (or 3.0 by that time), then rendering at a native 720p-1080p for a 1440p-4K DLSS-upscaled image should be great for docked mode already, and should allow for a 2.5 TFLOPS-ish docked mode, which seems quite possible. Handheld can go down to 360p-540p DLSS'ed to 720p-1080p for a crisp current-gen image as well. The GPU should therefore be the easiest part to get up to snuff for next-gen ports.

CPU shouldn't be a problem either, I think, since the CPU tech in mobile chips has advanced very rapidly, and the A77 or A78 should provide a bigger jump from the A57 than the Zen 2 cores do from Jaguar (NVIDIA are already using the A78 for their Orin automotive chip, so it's the most likely candidate for a Switch successor as well, imo). Same with flash memory: several vendors already offer mobile flash storage with over 1 GB/s read speed, which should be enough for 1080p assets (as opposed to XSX's 2.4 GB/s for 4K textures). It's memory bandwidth and cartridge speed/cost that remain the biggest potential bottlenecks.

This makes Lockhart such a ridiculously good proposition from MS.

I can see the ads already: budget 4K next-gen gaming for just 299.
MS/Sony consoles are AMD, and this is NVIDIA technology (and powered by NVIDIA-specific hardware, namely the tensor cores). We don't know for sure yet, but the consoles quite possibly do not have the hardware needed to provide the boosts discussed for this technology (or actually for a similar, non-NVIDIA technology that AMD has yet to develop/announce).
 

MasterC12TF

Alt Account
Banned
Mar 31, 2020
78
It's still uncertain whether next-gen consoles will support hardware-accelerated deep learning, I believe, so I don't think we can assume either XSX or PS5 will use a technology similar to this (never mind the fact that AMD haven't announced anything like this yet).

The learning part is not done by the GPU/device running the game. But still, it will be nice to see. A 400% difference in resolution appears visually negligible, so that power would be best spent elsewhere. I imagine high-end GPUs having volumetric lighting everywhere, perfect motion blur, DoF, some RT mixed in as well. Overall it will be great if devs target high-end GPUs from a reconstruction perspective.
 

Xando

Member
Oct 28, 2017
27,709
It has been confirmed that the XSX will be able to accelerate machine learning code.

The DirectML support for machine learning is something that could also prove interesting for developers and ultimately gamers. The 12 TFLOPS of FP32 compute can become 24 TFLOPS of FP16 thanks to the Rapid-Pack Math feature AMD launched with its Vega-based GPUs, but machine learning often works with even lower precision than that. With some extra work on the shaders in the RDNA2 GPU, it is able to achieve 49 TOPS at I8, 8-bit integer precision, and 97 TOPS of I4 precision. Microsoft states applications of DirectML can include making NPCs more intelligent, making animations more lifelike, and improving visual quality.

Until there are any games that actually demo this tech (like Control here, for example) I remain very sceptical. Also, this doesn't seem to be dedicated hardware like NVIDIA's tensor cores?

It doesn't have this.

It might or might not have some sort of different solution which has never been demoed before.
 
Last edited:

Zojirushi

Member
Oct 26, 2017
3,322
Is there a hardware reason for Nvidia to keep this locked to 20XX series cards? Or is it just a feature for sales reasons?
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
Microsoft has been experimenting with ML upscaling through DirectML in the context of DirectX 12 Ultimate. It will obviously be running on RDNA 2, so it might become a standard DirectX 12 Ultimate feature.
Yeah, as far as software goes, that's not the biggest question. The bigger question is whether RDNA 2 has any dedicated hardware for running the deep learning algorithm, otherwise no boosts will be had from this tech.
 

Lom1lo

Member
Oct 27, 2017
1,465
It has been confirmed that the XSX will be able to accelerate machine learning code.

The DirectML support for machine learning is something that could also prove interesting for developers and ultimately gamers. The 12 TFLOPS of FP32 compute can become 24 TFLOPS of FP16 thanks to the Rapid-Pack Math feature AMD launched with its Vega-based GPUs, but machine learning often works with even lower precision than that. With some extra work on the shaders in the RDNA2 GPU, it is able to achieve 49 TOPS at I8, 8-bit integer precision, and 97 TOPS of I4 precision. Microsoft states applications of DirectML can include making NPCs more intelligent, making animations more lifelike, and improving visual quality.

I guess this won't be as good as NVIDIA's approach. This is straight-up magic; it makes ray tracing really accessible. If this gets adopted by all games, I think NVIDIA will have a big edge vs consoles. I pray that this will be in the next Switch.
 

modiz

Member
Oct 8, 2018
18,022
It has been confirmed that the XSX will be able to accelerate machine learning code.

The DirectML support for machine learning is something that could also prove interesting for developers and ultimately gamers. The 12 TFLOPS of FP32 compute can become 24 TFLOPS of FP16 thanks to the Rapid-Pack Math feature AMD launched with its Vega-based GPUs, but machine learning often works with even lower precision than that. With some extra work on the shaders in the RDNA2 GPU, it is able to achieve 49 TOPS at I8, 8-bit integer precision, and 97 TOPS of I4 precision. Microsoft states applications of DirectML can include making NPCs more intelligent, making animations more lifelike, and improving visual quality.




www.overclock3d.net

Microsoft's DirectML is the next-generation game-changer that nobody's talking about

AI has the power to revolutionise gaming, and DirectML is how Microsoft plans to exploit it
I am pretty sure this isn't the same thing as the machine learning acceleration cores NVIDIA has.
 

Zedark

Member
Oct 25, 2017
14,719
The Netherlands
Is there a hardware reason for Nvidia to keep this locked to 20XX series cards? Or is it just a feature for sales reasons?
The actual algorithm is intense: it's an AI model that infers from a lower-resolution render what the image would look like at a higher resolution. To get any performance boost out of it, you need dedicated hardware that can run it alongside the main GPU hardware (the ALUs). The RTX cards have this special hardware (the tensor cores), whereas the GPU generations before them don't. So any card before the RTX series won't see any benefit from this technology (I'd even wager a guess it would slow them down, if anything).

Edit: From NVIDIA's website:
NVIDIA said:
Deep learning-based super resolution learns from tens of thousands of beautifully rendered sequences of images, rendered offline in a supercomputer at very low frame rates and 64 samples per pixel. Deep neural networks are then trained to recognize what beautiful images look like. Then these networks reconstruct them from lower-resolution, lower sample count images. The neural networks integrate incomplete information from lower resolution frames to create a smooth, sharp video, without ringing, or temporal artifacts like twinkling and ghosting.
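That "integrate incomplete information from lower resolution frames" line is the heart of it: temporal accumulation over jittered samples. A toy 1-D sketch of the idea (entirely my own illustration, not NVIDIA's code; DLSS 2.0's novelty is replacing the fixed blend below with a trained network that also knows when to reject stale history):

```python
# Toy temporal accumulation: each low-res frame samples the scene at a
# slightly different (jittered) offset, and a persistent history buffer
# integrates those incomplete samples over time, recovering detail that
# no single low-res frame contains.

HI = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4]  # "ground truth" high-res signal

def low_res_frame(frame_idx):
    """Sample HI at half rate; the subpixel jitter offset alternates per frame."""
    offset = frame_idx % 2
    up = [None] * len(HI)                 # sparse "upscaled" frame
    for i, s in enumerate(HI[offset::2]):
        up[2 * i + offset] = s            # place each sample where it was taken
    return up

history = [0.5] * len(HI)                 # flat initial guess
for f in range(200):
    frame = low_res_frame(f)
    # blend each newly sampled pixel into the history buffer
    history = [h if c is None else 0.8 * h + 0.2 * c
               for h, c in zip(history, frame)]

# After many jittered frames the history converges on the full-res signal.
assert max(abs(h - t) for h, t in zip(history, HI)) < 0.01
```

In a real renderer the history has to be reprojected with motion vectors before blending, which is exactly where the hand-tuned heuristics of TAA (and DLSS 1.9) break down and where a learned blend helps.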
 
Last edited:

Deleted member 5028

User requested account closure
Banned
Oct 25, 2017
9,724
With techniques like this becoming standard for the next generation, it looks like the only distinguishable difference between platforms will be in loading times. It will make for pretty boring DF versus videos in the future :(

IMO natively rendering 4K is not a very creative way to spend any excess TFs
Nah. When you have publishers not putting out X enhanced versions of current gen games there'll always be a game that doesn't take advantage. See: RE3