
UltraMagnus

Banned
Oct 27, 2017
15,670
Weird post, when the Switch already has vastly better support than previous Nintendo platforms because of the technology used, the engines supported, and the tools provided by nVidia. If they continue on that road, dev support will adapt accordingly.

It might never reach parity with the more powerful platforms in terms of AAA releases, but that's pretty much it.

I think Switch 2 is going to be a bit of a reckoning for Japanese developers. Franchises like the proper Final Fantasy games, Resident Evil, Monster Hunter, and Kingdom Hearts should be on the market leading platform in Japan.

Japanese devs I think will settle into a routine of PS5 + Switch 2 being the focus platforms with XBSX and PC also being in the cards.

Whatever they can pick up from Western devs is somewhat gravy I think, but Call of Duty and Madden NFL would be nice. If they can keep The Witcher 4 and Elder Scrolls VI and next gen Overwatch, that would be great.

DLSS could be a really big factor in making ports easier.

For Switch 1 I can give some devs a pass; Nintendo looked dead in the water after the Wii U, and even the 3DS was floundering, with smartphone games seemingly eating into large chunks of its market.

But Switch 2 ... there's every reason to believe it could sell just as many units as the PlayStation 5, and outsell the PS5 in Japan on top of that. I suspect Microsoft will probably eat a bit into Sony's marketshare next gen, because they look a lot sharper this time around.
 

cw_sasuke

Member
Oct 27, 2017
26,345
I think Switch 2 is going to be a bit of a reckoning for Japanese developers. Franchises like the proper Final Fantasy games, Resident Evil, Monster Hunter, and Kingdom Hearts should be on the market leading platform in Japan.

Japanese devs I think will settle into a routine of PS5 + Switch 2 being the focus platforms with XBSX and PC also being in the cards.

Whatever they can pick up from Western devs is somewhat gravy I think, but Call of Duty and Madden NFL would be nice. If they can keep The Witcher 4 and Elder Scrolls VI and next gen Overwatch, that would be great.

DLSS could be a really big factor in making ports easier.

For Switch 1 I can give some devs a pass; Nintendo looked dead in the water after the Wii U, and even the 3DS was floundering, with smartphone games seemingly eating into large chunks of its market.

But Switch 2 ... there's every reason to believe it could sell just as many units as the PlayStation 5, and outsell the PS5 in Japan on top of that. I suspect Microsoft will probably eat a bit into Sony's marketshare next gen, because they look a lot sharper this time around.
I don't know if I would go that far - all I'm saying is better 3rd-party support isn't off the table if the hardware is powerful and viable enough. The Switch proved that, and I'm sure Nintendo and nVidia are going to continue down that route.

Japanese support is weird - many of these companies will play favorites even though they could implement multiplatform strategies right now with many of their titles, so I don't expect too much there compared to what we have now. The Western market has much more potential, because Sony or MS wouldn't be able to keep software away from a Nintendo platform if it's deemed viable by a publisher.

They need to continue and improve on what they started with the Switch - a Switch 2 should get Cyberpunk, Diablo 4 and current service games like Fortnite/Apex in their latest iteration.

Looking forward to what N and N are cooking up for the Hybrid future.
 

Mr Swine

The Fallen
Oct 26, 2017
6,033
Sweden
I think other methods were better than DLSS 1.0 ... but 2.0 is a massive leap forward, and who knows what 3.0 will bring.

I wonder if you can go even lower than 540p honestly and still get a fairly nice image.

I suspect that DLSS 3.0 will probably be RTX 30XX exclusive, and that DLSS will be usable in all games using Vulkan and DX12. It won't be as good as a game built around it, but better than nothing, I guess?
 

UltraMagnus

Banned
Oct 27, 2017
15,670
I don't know if I would go that far - all I'm saying is better 3rd-party support isn't off the table if the hardware is powerful and viable enough. The Switch proved that, and I'm sure Nintendo and nVidia are going to continue down that route.

Japanese support is weird - many of these companies will play favorites even though they could implement multiplatform strategies right now with many of their titles, so I don't expect too much there compared to what we have now. The Western market has much more potential, because Sony or MS wouldn't be able to keep software away from a Nintendo platform if it's deemed viable by a publisher.

They need to continue and improve on what they started with the Switch - a Switch 2 should get Cyberpunk, Diablo 4 and current service games like Fortnite/Apex in their latest iteration.

Looking forward to what N and N are cooking up for the Hybrid future.

I think even the Japanese market will crack. I think every Japanese publisher is looking at current Switch sales and Animal Crossing going supernova and thinking, "holy shit! This thing is selling big in the West too." Secondary to that, I think the PS4's kinda-last-ditch effort to revive the floundering home console business in Japan (by consolidating all of FF, DQ, KH, and MH on one platform) really was somewhat of a dud. The Japanese consumer has kinda spoken and said they don't want a stationary big-box home console any longer.

This is going to be the first generation ever in Japan, I think, where Nintendo basically does a clean sweep of the top 5+ best-selling Japanese software titles; not even the Famicom era or DS era saw that. All the biggest-selling Japanese console games are now Nintendo titles ... Animal Crossing, Pokemon, Splatoon, Smash Brothers, Mario Kart. Even a Zelda is now selling on par with the biggest Japanese 3rd-party games in Japan, which is insane.
 

TronLight

Member
Jun 17, 2018
2,457
I suspect that DLSS 3.0 would probably be RTX 30XX exclusive and that DLSS can be used in all games using Vulcan and DX12. It won't be as good as a game built around it but better than nothing I guess?
The way DLSS works, you can't just force it on a game like any shader. It needs training for the neural network, targeted to the game. It's not a blanket solution in the way that checkerboard rendering kind of is.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
The way DLSS works, you can't just force it on a game like any shader. It needs training for the neural network, targeted to the game. It's not a blanket solution in the way that checkerboard rendering kind of is.
It is actually exactly as much of a blanket solution as checkerboard rendering in DLSS 2.0 - it ties into the same position in the render pipeline where TAA sits, and no longer requires per-game neural network training. It is a generic in-slot that requires a developer to feed it things like the depth buffer, motion vectors, exposure... just like TAA. It is less intensive to implement in a game than checkerboard rendering, as you are still working within the aspect ratio of the final frame (among other concerns).
This clearly isn't universal, though. Not all reconstruction methods are the same quality, and some aren't anywhere near so obvious. For example, your colleagues at Digital Foundry didn't detect reconstruction in DMC 5 until taking a third look at the game. And that's while doing tech analysis, not just playing. For multiple games, even when able to detect reconstruction DF will note that it's "very similar" or "almost unnoticeable", etc. compared to native.
Just because my colleagues did something in a previous video does not reflect on me and my appraisal of things as they are now. I am not my colleagues, and in general, opinions can change over time. I am pretty sure DF's general opinion of TAA in its first ghosty, blurry, and overly soft presentations, like we saw with HRAA, changed dramatically over time when you look at state-of-the-art TAA in something like Call of Duty.
The universal opinion of Rich and myself (I have not asked John what he thinks of DLSS 2.0) is indeed that it is the best reconstruction we have seen - in terms of the artefacts it leaves, its cost (the ms cost is very low), and the amount of rendering time it saves vs. an image of similar quality.
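To sketch what that generic in-slot looks like in practice (the field names here are illustrative assumptions, not the actual NGX/DLSS API):

```python
from dataclasses import dataclass
from typing import Any, Tuple

# Illustrative sketch only - not the real NGX/DLSS API. Any TAA-slot
# reconstruction pass consumes roughly these per-frame inputs.
@dataclass
class ReconstructionInputs:
    color: Any                          # low-res lit frame, rendered with sub-pixel jitter
    depth: Any                          # low-res depth buffer
    motion_vectors: Any                 # per-pixel motion vectors, used for reprojection
    exposure: float                     # scene exposure, so history blends in a stable space
    jitter_offset: Tuple[float, float]  # this frame's sub-pixel jitter offset

frame = ReconstructionInputs(color=None, depth=None, motion_vectors=None,
                             exposure=1.0, jitter_offset=(0.25, -0.25))
```

If an engine already produces these buffers for its own TAA, slotting a reconstruction pass in at the same point is mostly plumbing rather than new rendering work.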

This is a slight tweak I'd recommend based on how you've presented your several videos on DLSS. In your narration for both Wolfenstein and Control, you've been clear that "less detail" in native rendering versus DLSS is due primarily to TAA. But there's no footage showing the non-TAA results you discuss above. I believe the lack of direct context like this has misled some people's evaluation of the tech.
The thing to beat/approximate better is the TAA presentation at native. In games like Control or Wolfenstein: Youngblood, the game's presentation literally breaks if you turn off temporal anti-aliasing. Hair no longer renders correctly, ambient occlusion no longer renders correctly, SSR no longer renders correctly (all of this in both idTech and Northlight), and even the intended denoising for RT is now incorrect (most devs have temporal-spatial denoisers and then let TAA clean up the results from that as well; this happens in both idTech and Northlight). Control does not even offer an in-menu option to turn off TAA - and idTech has likewise switched over to not allowing you to turn off TSSAA, because it breaks the game's rendering. TAA cleans up the many stochastic elements that games now use, and how they amortise costs over frames. This applies to all games released so far with DLSS.
DLSS has motion artifacts, ghosting, and shimmering, even in 2.0. All are visible in your video, but you don't consistently call attention to them. This is because they're mostly only visible when zoomed, which you do explicitly point out several times.
DLSS is essentially temporal reconstruction (a modification of native-res TAA) and - this is very important - is no longer tied to the errors of reprojection (that is what it does differently: it changes how reprojection and rejection are managed) - there is no ghosting. This is a pretty big deal, as every TAA and temporal technique (most checkerboarding included) actually still ghosts. DLSS does not ghost. This is ghosting (Samuel Hayden's arms):


or this is ghosting:
8NYiIGS.jpg

Here is a comparison of DLSS 2.0 vs. native-resolution TAA at the same frame of a very small animation of Jesse rocking back and forth:
comprealrckf4.png



But as stated above, such problems with other reconstruction methods--or some render issues even when native--also require zooming to see. All DF's comparisons rely to some extent on closeups for definitive analysis. Should we ignore all those other findings as not really meaningful, as you repeatedly encourage here? Or should we accept that with complicated imagery, close examination is necessary, and results from it shouldn't be minimized or disregarded?
In my video I say readily that the reconstruction at 1080p with DLSS is obvious at normal screen distance due to sharpening.

For 2160p reconstruction from 1080p? That requires a 400% zoom in to find the artefacts in my experience. 2160p reconstruction from 1440p? That one is really only visible on text panels that are far into the distance, so something like an 800% zoom in.

And I think this is a wholly unique situation for 4K reconstruction (especially from something rendering 1/4 the pixels internally), as even the best reconstructions I have seen look categorically unlike native 4K presentations. In my experience, temporal-accumulation reconstructions stop looking like native 4K in still shots (so not even in movement) when the internal scaling factor drops below 0.8 (so at much higher res than 1440p or 1080p). My favourite reconstruction technique before DLSS was actually HZD's and Death Stranding's (1920x2160): even there, the developers say in their presentation on it that it has a different look than 4K in the raw, non-zoomed image. It cannot preserve single-pixel detail like 4K can. This has a macro effect on the image: it maintains great silhouettes with its most excellent anti-aliasing, but forgoes detail inside the surface.
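For concreteness, the internal pixel ratios behind the modes I'm describing (standard resolutions assumed) work out like this:

```python
# Pixel ratios and per-axis scale factors for the reconstruction modes above
# (standard resolutions assumed).
modes = {
    "DLSS performance, 1080p -> 2160p": ((1920, 1080), (3840, 2160)),
    "DLSS quality, 1440p -> 2160p":     ((2560, 1440), (3840, 2160)),
    "HZD/DS checkerboard, 1920x2160":   ((1920, 2160), (3840, 2160)),
}
for name, ((sw, sh), (dw, dh)) in modes.items():
    pixel_ratio = (sw * sh) / (dw * dh)
    print(f"{name}: {pixel_ratio:.0%} of native pixels "
          f"(axis scales {sw / dw:.2f} x {sh / dh:.2f})")
```

So 1080p-to-2160p is reconstructing from a quarter of the native pixels, while the checkerboard case starts from half.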
 
Last edited:

TronLight

Member
Jun 17, 2018
2,457
It is actually exactly as much of a blanket solution as checkerboard rendering in DLSS 2.0 - it ties into the same position in the render pipeline where TAA sits, and no longer requires per-game neural network training. It is a generic in-slot that requires a developer to feed it things like the depth buffer, motion vectors, exposure... just like TAA. It is less intensive to implement in a game than checkerboard rendering, as you are still working within the aspect ratio of the final frame (among other concerns).
I missed this. That's insane.
 
OP

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Dictator, given that per-game training isn't necessary anymore, am I correct in assuming that training with high-resolution ground-truth images of the same game would yield more accurate results?

Given this image, I figured it may be true, but it's only really applicable if the developer has the time to do so:

nvidia-dlss-2-0-architecture.png
 

Vash63

Member
Oct 28, 2017
1,681
It is actually exactly as much of a blanket solution as checkerboard rendering in DLSS 2.0 - it ties into the same position in the render pipeline where TAA sits, and no longer requires per-game neural network training. It is a generic in-slot that requires a developer to feed it things like the depth buffer, motion vectors, exposure... just like TAA. It is less intensive to implement in a game than checkerboard rendering, as you are still working within the aspect ratio of the final frame (among other concerns).

Just because my colleagues did something in a previous video does not reflect on me and my appraisal of things as they are now. I am not my colleagues, and in general, opinions can change over time. I am pretty sure DF's general opinion of TAA in its first ghosty, blurry, and overly soft presentations, like we saw with HRAA, changed dramatically over time when you look at state-of-the-art TAA in something like Call of Duty.
The universal opinion of Rich and myself (I have not asked John what he thinks of DLSS 2.0) is indeed that it is the best reconstruction we have seen - in terms of the artefacts it leaves, its cost (the ms cost is very low), and the amount of rendering time it saves vs. an image of similar quality.


The thing to beat/approximate better is the TAA presentation at native. In games like Control or Wolfenstein: Youngblood, the game's presentation literally breaks if you turn off temporal anti-aliasing. Hair no longer renders correctly, ambient occlusion no longer renders correctly, SSR no longer renders correctly (all of this in both idTech and Northlight), and even the intended denoising for RT is now incorrect (most devs have temporal-spatial denoisers and then let TAA clean up the results from that as well; this happens in both idTech and Northlight). Control does not even offer an in-menu option to turn off TAA - and idTech has likewise switched over to not allowing you to turn off TSSAA, because it breaks the game's rendering. TAA cleans up the many stochastic elements that games now use, and how they amortise costs over frames. This applies to all games released so far with DLSS.

DLSS is essentially temporal reconstruction (a modification of native-res TAA) and - this is very important - is no longer tied to the errors of reprojection (that is what it does differently: it changes how reprojection and rejection are managed) - there is no ghosting. This is a pretty big deal, as every TAA and temporal technique (most checkerboarding included) actually still ghosts. DLSS does not ghost. This is ghosting (Samuel Hayden's arms):


or this is ghosting:
8NYiIGS.jpg

Here is a comparison of DLSS 2.0 vs. native-resolution TAA at the same frame of a very small animation of Jesse rocking back and forth:
comprealrckf4.png




In my video I say readily that the reconstruction at 1080p with DLSS is obvious at normal screen distance due to sharpening.

For 2160p reconstruction from 1080p? That requires a 400% zoom in to find the artefacts in my experience. 2160p reconstruction from 1440p? That one is really only visible on text panels that are far into the distance, so something like an 800% zoom in.

And I think this is a wholly unique situation for 4K reconstruction (especially from something rendering 1/4 the pixels internally), as even the best reconstructions I have seen look categorically unlike native 4K presentations. In my experience, temporal-accumulation reconstructions stop looking like native 4K in still shots (so not even in movement) when the internal scaling factor drops below 0.8 (so at much higher res than 1440p or 1080p). My favourite reconstruction technique before DLSS was actually HZD's and Death Stranding's (1920x2160): even there, the developers say in their presentation on it that it has a different look than 4K in the raw, non-zoomed image. It cannot preserve single-pixel detail like 4K can. This has a macro effect on the image: it maintains great silhouettes with its most excellent anti-aliasing, but forgoes detail inside the surface.


For me, the sharpening halos are by far the worst thing about DLSS. I have a 2080 and it was super apparent in older games like Exodus, and I'm really disappointed that it's still present in 2.0 with zero option to turn it off. I'm even more disappointed by the fact that, according to your video, Nvidia does make it tunable, and it's developers that have decided to force it on. As an example, I thought the original DLSS implementation in the launch-day version of Exodus, while blurrier, was less compromised, as it didn't have the oversharpening artifacts and ringing. I ended up just disabling it and taking the framerate hit.

Have you heard anything from developers on why they force such a thing?
 

Dennis8K

Banned
Oct 25, 2017
20,161
Can DLSS 2.0 do 8K reconstruction from 4K?

For the upcoming 3080 series of NVIDIA GPUs I was considering getting an 8K monitor and was wondering if gaming at 8K might be feasible with DLSS 2.0
 

ppn7

Member
May 4, 2019
740
Enjoy your future 8K TV, but maybe only in a few years, once you can get a GPU that handles 4K decently.
 

Vimto

Member
Oct 29, 2017
3,714
For me, the sharpening halos are by far the worst thing about DLSS. I have a 2080 and it was super apparent in older games like Exodus, and I'm really disappointed that it's still present in 2.0 with zero option to turn it off. I'm even more disappointed by the fact that, according to your video, Nvidia does make it tunable, and it's developers that have decided to force it on. As an example, I thought the original DLSS implementation in the launch-day version of Exodus, while blurrier, was less compromised, as it didn't have the oversharpening artifacts and ringing. I ended up just disabling it and taking the framerate hit.

Have you heard anything from developers on why they force such a thing?
What res are you playing at, and are the halo effects noticeable at higher res (4K)?
 

ArchedThunder

Uncle Beerus
Member
Oct 25, 2017
19,000

Machine-learning-based upscaling is probably going to be the way console games achieve 8K - in 4 years with pro consoles (if they make them), and in 7 years with the following generation. It's nice to know devs won't have to waste an insane amount of resources on 8K.
I can only imagine that in 4-7 years machine learning based upscaling from 4K to 8K is going to look virtually identical to native 8K, though I guess Sony and MS might have to build some specific hardware into the consoles to do it.
 

Galava

▲ Legend ▲
Member
Oct 27, 2017
5,080
So basically, once DLSS 2.0 becomes the "norm" the way TAA did, we will only need a GPU capable of pushing half the pixels of our monitor - or we get prettier pixels :)

Now seriously, this is actually huge, seeing how memory-limited and memory-bandwidth-limited we've become in the past few years with high resolutions and high-quality textures...
 

Deleted member 46489

User requested account closure
Banned
Aug 7, 2018
1,979
I can't even begin to imagine what this will do to gaming laptops. Think of 1 kg (2.2-pound) laptops with 10+ hours of battery life and phenomenal performance in ray-traced modern games. With 540p-to-1080p reconstruction, this is an actual possibility.
 

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
prestazioni-control-83534.768x432.jpg


26fps maxed out with RT. Not sure people want to move to 8K from their 4K set to play with fewer details and, a fortiori, without RT.
One title out of many and with Raytracing.

You would be able to reach 8K30 with DLSS Performance, volumetrics medium and maybe one or two RT effects turned off.

That's if the performance cost of DLSS stays at ~10%, though. It could be much more work to reconstruct from 4K to 8K due to the much higher pixel count, IDK.
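As a back-of-the-envelope check on that worry (assuming the standard 3840x2160 and 7680x4320 resolutions): 8K has exactly 4x the output pixels of 4K, so any reconstruction cost that scales with output resolution would grow by about that factor.

```python
# 8K output has 4x the pixels of 4K output (standard resolutions assumed),
# so a reconstruction pass whose cost scales with output pixels would cost
# roughly 4x as much. The ~10% figure above is conjecture, not a measurement.
px_4k = 3840 * 2160   # 8,294,400
px_8k = 7680 * 4320   # 33,177,600
print(px_8k / px_4k)  # -> 4.0
```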
 

Vimto

Member
Oct 29, 2017
3,714
Even with a conservative 100% extra perf on RT and 50% extra perf on rasterization, Ampere will be a beast.

Add DLSS 2.0 to that? We'd have comfortable 4K 100+fps with RT on high-end cards.
 

doodlebob

Member
Mar 11, 2018
1,401
This probably only really works with photoreal games, because the training set is high-quality real-life images. I don't see how it's feasible for the vast majority of Nintendo's games, then.
 
OP

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
This probably only really works with photoreal games, because the training set is high-quality real-life images. I don't see how it's feasible for the vast majority of Nintendo's games, then.
Yoshi is sub-540p in order to hit 60fps; it could definitely use it. As could Kirby, if they want to target 60fps again. And that's ignoring whether they add ray tracing. If anything, DLSS might work out better for cartoony games, because micro-detail is usually minimal - similar to why Waifu2x works better with anime art than with, say, a Renaissance painting. The solid colors and higher-contrast edges would work in its favor when upscaling.
 

ppn7

Member
May 4, 2019
740
One title out of many and with Raytracing.

You would be able to reach 8K30 with DLSS Performance, volumetrics medium and maybe one or two RT effects turned off.

If the performance cost of DLSS stays at 10% though. Could be way more work to reconstruct from 4K to 8K due to much higher pixel density, IDK.

Yeah, of course, that's just one game with RT, and a 2020 game at that. There will be other, much more demanding games.
As I said, I don't think people would want to exchange 4K with RT for 8K without it.
And I'm not even talking about the fact that you can use DLSS in some cases.

For a first-generation RT card, that's fine. And considering Ampere is gonna be a jump over that, I think my point stands.

I agree with you; it's just that people target 60fps, and they don't want to go from 4K 60fps to 8K 30fps.
 

Galava

▲ Legend ▲
Member
Oct 27, 2017
5,080
The goal these next few years should be 4K DLSS RT at max settings. Maybe some will experiment with 8K30, but that's still too expensive right now.

I was always reluctant to upgrade to 4K just because of all the costs that go with it (having to buy a top-of-the-line GPU so I can play those games at a proper framerate), but DLSS makes that easier to achieve.
 

Liabe Brave

Professionally Enhanced
Member
Oct 27, 2017
1,672
What reconstruction methods are you suggesting are better than DLSS 2.0?
The Decima version of CBR produces a vastly cleaner image than what we see here. Not quite as clean but I'd still say superior are temporal reconstruction in the RE Engine, Insomniac's, and the IW Engine. In the next tier, I'd put the Foundation Engine, SSM's, Frostbite, and AnvilNext--all capable of exceeding DLSS, but sometimes just trading blows. Next step down, more akin to DLSS and sometimes worse, would be Snowdrop, Luminous Studio, Unreal, etc. The worst reconstruction approaches I've seen are from RAGE and Disrupt, as well as some smaller Japanese devs. (Size isn't necessarily the cause, though; the Housemarque and Second Order proprietary reconstruction methods are pretty darn good.)

Implementation matters, of course; engines are flexible to multiple technical approaches and disparate art direction, so any of the above can be better or worse in certain instances. There may also be individual tradeoffs on certain aspects (Decima gives up some sharpness to gain more accuracy, etc.). And to be clear, no reconstruction actually matches native rendering. (Basic CBR in theory can, but in reality the more precise you become to approach that goal, the more artifacts tend to appear.)

As for it not being better that other reconstruction methods...Huh? Are you telling me this checkerboard rendering which produces more artifacts is superior to DLSS?
CBR does not inherently produce more artifacts than DLSS. There are worst-case scenarios where it does, but based on the footage in Mr. Battaglia's tests DLSS 2.0 creates about as many artifacts as average CBR. It produces more than the best CBR. And there are non-CBR methods which produce about the same amount of artifacts, but they're less noticeable due to their nature.

Which reconstruction method looks and performs better than DLSS?
Performance is really where DLSS shines, because you get to drop 75% of your rendering budget to make up for the reconstruction cost - which is also partially offloaded to dedicated hardware. (A DLSS-equivalent technique wouldn't provide as much headroom on GPUs without tensor cores.) All that said, I think maybe you could make a case for a "geometry render" from 1080p to 4K, followed by a sharpening filter. That would definitely be far cheaper performance-wise than DLSS. And while it'd also have much less interior detail, there are probably specific art styles where that wouldn't be immediately visible (especially if you were carefully maximizing texel density at the lower res).

This is just conjecture and I could be wrong. It may be important to be clear that I'm not saying that DLSS is bad, just that it isn't "magic" and it still has obvious tradeoffs. That isn't a demerit! It's typical for rendering techniques. We have very, very cheap methods to add pixels, but they often don't look good. There's also relatively expensive reconstruction that looks very good indeed. And then multiple methods tweaked for other balances of accuracy and overhead. DLSS's particular set of tradeoffs occupies a new zone in possibility space: quality that almost matches the high end methods, with a cost about 60% less (provided you have the right hardware!). As such, it will no doubt be useful for specific applications.
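A toy frame-budget sketch of that tradeoff (every number here is an illustrative assumption, not a measurement): render at quarter resolution, pay a fixed reconstruction cost, and compare with native.

```python
# Hypothetical numbers only: a 4K/30fps frame budget, an internal render whose
# cost scales linearly with pixel count, and an assumed fixed DLSS pass cost.
native_4k_ms = 33.3                 # assumed native 4K frame time
internal_ms = native_4k_ms * 0.25   # quarter-pixel internal render (idealized)
reconstruct_ms = 1.5                # assumed reconstruction cost on tensor cores
total_ms = internal_ms + reconstruct_ms

print(f"native: {native_4k_ms:.1f} ms, reconstructed: {total_ms:.1f} ms")
headroom_ms = native_4k_ms - total_ms  # budget freed for RT, higher settings, etc.
```

The point of the sketch is only that the reconstruction cost is small next to the rendering time it frees up; real scaling is never perfectly linear with pixel count.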
 

ppn7

Member
May 4, 2019
740
The main issue with CBR is that it has to be done per game, per engine; you need more time to get a good result.
With DLSS 2.0 you put the SDK inside your game engine, and it's done.

Less time, less work, less cost.
 

Pargon

Member
Oct 27, 2017
11,994
The Decima version of CBR produces a vastly cleaner image than what we see here.
I'll reserve judgement until it's on PC, but everything I've seen from those games is very soft.
Not quite as clean but I'd still say superior are temporal reconstruction in the RE Engine […]
This stands out and frankly discredits your post.
RE Engine, in its latest iteration (RE3), still has significant ghosting, along with very noticeable aliasing/flickering with its TAA and reconstruction methods.
Image quality is arguably better when using resolution scaling than reconstruction if you pick settings of equivalent performance. And its image scaling certainly doesn't compare to DLSS.

It seems like there must be one specific thing that you really dislike about DLSS which is making you rank it lower than so many others.
And most of the ones listed don't even have a native or supersampled reference for comparison.
 
OP
OP
ILikeFeet

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
All those reconstruction methods are nice, but they don't bring such a dramatic improvement in performance, while DLSS at least gets close to their quality while boosting framerate significantly.

I guess if absolute IQ is all you want, then it's easy not to put DLSS at the top. But DLSS is all about trying to get native-res-like IQ at the framerates of a lower resolution.
 

tuxfool

Member
Oct 25, 2017
5,858
All those reconstruction methods are nice, but they don't bring such a dramatic improvement in performance, while DLSS at least gets close to their quality while boosting framerate significantly.

I guess if absolute IQ is all you want, then it's easy not to put DLSS at the top. But DLSS is all about trying to get native-res-like IQ at the framerates of a lower resolution.
I very much doubt those other techniques would have anywhere near the same success upscaling a 540p image. As one of the authors notes, that is the stress test of this particular algorithm.

The inherent softness and ghosting of other techniques seem particularly noticeable. Softness isn't necessarily a problem, but their success/acceptability is very much a function of a higher baseline resolution.
 

dgrdsv

Member
Oct 25, 2017
11,846
I'll reserve judgement until it's on PC
I'll be very surprised if Decima's CBR makes it to PC. It is highly likely hand-tailored to function only on the PS4 Pro's GCN GPU, and while they could probably make it work on Polaris+ AMD parts on PC, it will most likely be a no-go on any NV GPU.
 

Pargon

Member
Oct 27, 2017
11,994
I'll be very surprised if Decima's CBR makes it to PC. It is highly likely hand-tailored to function only on the PS4 Pro's GCN GPU, and while they could probably make it work on Polaris+ AMD parts on PC, it will most likely be a no-go on any NV GPU.
Perhaps, but native 4K could still be compared against PS4 Pro captures to judge how effective it is.
And maybe they will implement DLSS 2.0 in the PC version.
 

Maple

Member
Oct 27, 2017
11,720
Maybe I'm misunderstanding this tech, but it seems like it makes it somewhat realistic that we'll be getting a Switch 2 capable of 4K when docked.

And even around 1440p in portable mode if Nintendo opts for a higher resolution display.
 
OP

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
I very much doubt those other techniques would have anywhere near the same success upscaling a 540p image. As one of the authors notes, that is the stress test of this particular algorithm.

The inherent softness and ghosting of other techniques seem particularly noticeable. Softness isn't necessarily a problem, but their success/acceptability is very much a function of a higher baseline resolution.
honestly, I'm just trying to leave myself some wiggle room, especially since I don't play many other games that utilize some form of reconstruction
 

Fafalada

Member
Oct 27, 2017
3,065
even there, the developers say in their presentation on it that it has a different look than 4K in the raw, non-zoomed image.
That was in comparison to the "ground truth" 16x supersampled image, not a realtime 4K render.

If we don't move the goal posts that way, matching native res in static shots is the "default" mode for typical temporal reconstruction methods.
 

Pottuvoi

Member
Oct 28, 2017
3,062
I'll be very surprised if Decima's CBR makes it to PC. It is highly likely hand-tailored to function only on the PS4 Pro's GCN GPU, and while they could probably make it work on Polaris+ AMD parts on PC, it will most likely be a no-go on any NV GPU.
I wonder if they changed it for Death Stranding, as they didn't use the PS4 Pro's ID buffer for Horizon; they most likely did use a couple of features that help sampling, MSAA and such, though.
All of this can be done in shaders if needed. (Perhaps the best resource is the Dark Souls CBR presentation.)

I wonder if the amount of work to test it on all machines is too much, though.
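A minimal sketch of the shader-style checkerboard merge idea (illustrative only; it skips the motion-vector reprojection and ID-buffer tricks a real implementation relies on):

```python
import numpy as np

def checkerboard_merge(curr, prev, parity):
    """Build a full frame from the half rendered this frame (the checkerboard
    cells selected by `parity`) plus the previous frame's pixels. A real
    implementation would first reproject `prev` using motion vectors."""
    h, w = prev.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    rendered = (xs + ys) % 2 == parity  # cells produced this frame
    out = prev.copy()
    out[rendered] = curr[rendered]
    return out

# Toy usage: previous frame all zeros, current frame all ones; the merged
# frame alternates between new and old pixels.
prev = np.zeros((4, 4))
curr = np.ones((4, 4))
merged = checkerboard_merge(curr, prev, parity=0)
print(merged[0, 0], merged[0, 1])  # -> 1.0 0.0
```

Flipping `parity` each frame is what gives full coverage over two frames; the hard part in practice is deciding when the previous frame's pixels are stale.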
 
Jun 2, 2019
4,947
Maybe I'm misunderstanding this tech, but it seems like it makes it somewhat realistic that we'll be getting a Switch 2 capable of 4K when docked.

And even around 1440p in portable mode if Nintendo opts for a higher resolution display.

Nintendo is not going for such a screen, since it would be a battery hog. 1080p max.

Still, it's exciting. Nintendo going with Nvidia, and DLSS being a resource-saving technique, basically guarantees that Switch 2 (or whatever it's called) will be a particularly special machine. We may even see the first ray-tracing tablet-like machine in a couple of years (that last one is a fantasy of mine; I just LOVE ray tracing).
 
OP

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Nintendo is not going for such a screen, since it would be a battery hog. 1080p max.

Still, it's exciting. Nintendo going with Nvidia, and DLSS being a resource-saving technique, basically guarantees that Switch 2 (or whatever it's called) will be a particularly special machine. We may even see the first ray-tracing tablet-like machine in a couple of years (that last one is a fantasy of mine; I just LOVE ray tracing).
Imagination has demoed RT on mobile hardware before, so the Switch 2 wouldn't be the first (though it might be the first product to ship with HW acceleration).

 
Last edited:
Jun 2, 2019
4,947
Imagination has demoed RT on mobile hardware before, so the Switch 2 wouldn't be the first (though it might be the first product to ship with HW acceleration).



You're right, I forgot about that. Still, any hardware can do ray/path tracing; the difference in this case would be that it's hardware-accelerated.

Which is still cool, and would still be a first - almost a parallel to the N64, which was the first machine of its form factor to implement HW-accelerated texture filtering and perspective correction.

We're living in exciting times. At times I feel like I'm back in my teens.
 

Vintage

Member
Oct 27, 2017
1,292
Europe
Dumb question, but is DLSS available for rasterization, or just ray tracing? I just remember playing Metro Exodus, where the only way to enable it was to use ray tracing.
 

MrKlaw

Member
Oct 25, 2017
33,038
I'd like to understand more about the relative cost in milliseconds of each of the main reconstruction techniques. Especially as we move to more performant next-gen systems, that overhead as a percentage of the per-frame budget will be even lower, giving enough headroom to further evolve the techniques used.