no, only RTX cards
DirectML will never be as good as DLSS because it's not hardware accelerated. DLSS uses the AI/Tensor cores built into every RTX GPU to accelerate the neural network that generates a high-resolution image from a lower-resolution one. It's very expensive to accelerate a neural network on GPU compute alone. I know because I've worked with image processing using neural networks: even after training, it takes a long time for a network to generate an image, and doing dozens of images every second requires a lot of resources, probably enough to negate any performance advantage of the method. Which is why I'm skeptical of AMD's next GPU having anything similar without the extra hardware to do it. We might have to wait a GPU generation to see it. There is no way I'm buying an AMD GPU because of this. I think anyone who bought the 5700/XT was screwed over by reviewers and YouTubers downplaying the extra hardware Nvidia integrated into their GPUs.
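To put rough numbers on the point above, here is a back-of-envelope sketch; the ops-per-pixel and throughput figures are illustrative assumptions, not measurements of DLSS or any specific GPU.

```python
# Back-of-envelope: running an upscaling network every frame at 4K.
# All figures below are illustrative assumptions, not measured values.

output_pixels = 3840 * 2160          # 4K output resolution
ops_per_pixel = 40_000               # assumed cost of a small conv net per output pixel
ops_per_frame = output_pixels * ops_per_pixel   # ~3.3e11 ops

shader_fp16_tflops = 20              # assumed FP16 rate on general compute units
tensor_fp16_tflops = 100             # assumed FP16 rate on dedicated matrix hardware

ms_on_shaders = ops_per_frame / (shader_fp16_tflops * 1e12) * 1e3
ms_on_tensor  = ops_per_frame / (tensor_fp16_tflops * 1e12) * 1e3

print(f"on general compute: ~{ms_on_shaders:.1f} ms/frame (vs a 16.7 ms budget at 60 fps)")
print(f"on dedicated ML hardware: ~{ms_on_tensor:.1f} ms/frame")
```

With assumptions like these, per-frame inference on general compute eats the whole 60 fps frame budget (and competes with rendering for the same units), while dedicated hardware leaves most of the frame free. Change the assumed figures and the absolute numbers move, but that is the shape of the argument being made above.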
DirectML – Xbox Series X supports Machine Learning for games with DirectML, a component of DirectX. DirectML leverages unprecedented hardware performance in a console, benefiting from over 24 TFLOPS of 16-bit float performance and over 97 TOPS (trillion operations per second) of 4-bit integer performance on Xbox Series X. Machine Learning can improve a wide range of areas, such as making NPCs much smarter, providing vastly more lifelike animation, and greatly improving visual quality.
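Taking the quoted peak figures at face value (and ignoring that this throughput is shared with everything else the GPU is doing), a quick bit of arithmetic shows what they mean per pixel at 4K/60:

```python
# Convert the quoted Series X peak ML figures into a per-pixel, per-frame budget.
# Assumes 4K at 60 fps and perfect utilisation, which real workloads never reach.

int4_ops_per_sec = 97e12             # "over 97 TOPS" of 4-bit integer
fp16_ops_per_sec = 24e12             # "over 24 TFLOPS" of 16-bit float

pixels_per_second = 3840 * 2160 * 60 # ~5e8 output pixels per second

print(f"INT4 ops available per output pixel: {int4_ops_per_sec / pixels_per_second:,.0f}")
print(f"FP16 ops available per output pixel: {fp16_ops_per_sec / pixels_per_second:,.0f}")
```

That works out to very roughly 195,000 INT4 ops or 48,000 FP16 ops per output pixel per frame if nothing else were running, which is the sense in which this class of hardware can plausibly fit ML work alongside a game.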
I really don't expect Nintendo to have had the foresight to make sure their next hardware supported DLSS. However, if it does, and it were to regularly receive next-gen ports, I think that would also inevitably lead to much more DLSS support on the PC. As it stands, DLSS only being included in games that have a marketing deal with Nvidia has me doubting it'll see much more support than PhysX got back in the day.
> A shame that Horizon isn't going to include it. Right now the few titles with DLSS 2.0 aren't really much to my liking, so I hope to see it become more common in the future.

If we're lucky, maybe it'll get patched in?
I really regret buying a laptop with a 1660ti instead of an RTX 2060. It made sense at the time (performance was very similar for much less money) but now that DLSS benefits are starting to become apparent, it's quite clear it was a mistake. :(
I think the next-gen version of the GTX 1660 (sub-$200) will come out in spring 2022, about a year and a half after the 3080.
If you're able to sell your 1660 for near MSRP and have the extra ~$120, then maybe? You'll see a small 10-15% bump in performance. But I think the bigger, more noticeable upgrade would be ray tracing with DLSS enabled.
> I have a 2080 Ti so I have no chip on my shoulder, but I'm sure AMD is working to be competitive here. MS has their own solution as well; no need to be dismissive of AMD's future efforts in this space.

I am with you, and I love AMD as a company, so I wish them well. But there are some real concerns here, and again I will explain it via a photography analogy, because I am very aware of that industry.
Honestly, I'd ride with the 1660 Super until the 3000 series releases their 3060 this winter (Nov-March). Or, if you're not noticing any performance bottlenecks, wait a couple of years for the 4000 series or see what AMD does.
VR magnifies every single temporal artifact by 10x (at least), and DLSS still has a ton of those, so no, the current incarnation would likely look awful.
> But that said, Nvidia has VRSS for VR and it works pretty great. It's not the same as DLSS at all, but similar in its goals.

Well, PCVR has had limited use of foveated/lens-matched rendering thanks to the mess of per-vendor hardware support (Nvidia has like 3 distinct approaches to this now). VRS won't be as effective as some of the others, but it's at least a relatively non-invasive implementation (unlike others) for this use case, so maybe it'll finally pick up in use.
> The RTX Turing cards with tensor cores are on a whole other level of power consumption and performance compared to the Tegra mobile chips.

Now, maybe, but we'll see in 3 years. There's also the possibility of an eGPU/SCD for docked mode only. DLSS might not make sense on a 1080p screen undocked, but might pay dividends on your 4K TV, with a Switch 2 keeping pace with or even exceeding PS5 and XSX IQ.
Pretty wild that it could actually be true for some games.
> The DLSS model needs to be trained on a per-game basis, so Nvidia needs to push driver updates when they add support for specific games.

DLSS 1.0 needed per-game training, but the current 2.0 iteration is universal and can be implemented without extra training.
Well the catch is you have to pay out the ass to get a high end GPU that has DLSS 2.0.
It used to be you could get a GPU that's 2x what you had 2 years ago for $500. Now you have to spend a grand and have a game with DLSS 2.0 support. The 2080 Ti is a rip-off; Nvidia is selling flagship GPUs at Titan prices.
It works on all 2000 series cards. You can buy something like a 2060 Super for less than $500. Not saying it's cheap, but you don't need a grand to utilize DLSS 2.0.
I mean, to get 2x a 1080 you'd probably need a 2080 Ti with a DLSS-enabled game, right?
> I mean, to get 2x a 1080 you'd probably need a 2080 Ti with a DLSS-enabled game, right?

A 2080 Ti will give you ~2x over a 1080 without DLSS. With DLSS, even a 2060S will give you ~2x over a 1080.
> The first graph shows the 1080 Ti running 75% as fast as a card that came out 2 years later at over twice the price. I'm not sure how to interpret the 2nd graph without an apples-to-apples comparison.

The second graph shows the 2060S with DLSS performance mode running at ~95 fps (slightly sub-100) at 4K, which is twice as fast as the 1080 (47 fps, first graph).
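Spelling out the arithmetic in that comparison (fps values as quoted in the comment above):

```python
# Relative performance figures being discussed, as read off the graphs.
gtx_1080_fps       = 47    # 4K, native
rtx_2060s_dlss_fps = 95    # 4K output, DLSS performance mode

print(f"2060S + DLSS vs GTX 1080: {rtx_2060s_dlss_fps / gtx_1080_fps:.2f}x")   # ~2.02x
```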
With this tech progressing so rapidly from v1 to v2, if the next-gen consoles don't have a viable alternative to this, they will be left in the dust pretty quick.

Hope that's not the case.

Maybe they should have waited one more year. Seems like they are being released just on the cusp of something revolutionary and barely missing it.
> With this tech progressing so rapidly from v1 to v2, if the next-gen consoles don't have a viable alternative to this, they will be left in the dust pretty quick.

Honestly, that's almost always the case, and there's always something new on the horizon. Last generation it was variable refresh rates, which started to become a thing in 2013 just as new consoles were coming out.
> Thanks for the reply. Yeah, I noticed my CPU throttling down from 3.8 GHz to an average of 3.3 GHz. How does ThrottleStop work?

Well, there is always work to be put in if you want to get the most out of your machine. Or you accept it as it is.
Issue with my laptop is it's only an inch thick, so it gets hot. Avg CPU temp sticks around 90°C at full load. Reckon it would be safe to un-throttle it?

Your laptop sounds like a beast as well; what laptop do you have, if you have desktop parts in it?

What do you reckon the odds are of playing Cyberpunk on fairly good settings with my 2070 Max-Q using DLSS? I thought it was good until I read that the recent playthroughs were on a 2080 Ti at 1080p WITH DLSS on. Thought at that point my chances of max settings were dashed lol
> Do people think they're going to try and monetize DLSS through a sub?

They already monetize it by charging more for RTX cards than for the previous cards.
Impressions are great. Sounds like it's going to be big and too good to be true.
> I'll humbly suggest that someone more in the know about these things should chime in, but RDNA2 has support for double-rate 16-bit float performance and more machine-learning precision support at 8-bit/4-bit in each CU. It's quite literally doing the same thing as a tensor core.

The difference is that tensor cores are separate, additional hardware. The RPM for AMD runs on the general compute units. Therefore, running a DLSS-alike will use up far more resources that can't be used for the rest of rendering. It's analogous to running ray tracing in software as opposed to RT hardware.
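To make the throughput side of this concrete, here is the standard peak-FLOPS arithmetic; the CU count and clock are hypothetical example figures rather than any particular SKU, and the point is less the absolute number than that this rate comes out of the same compute units that are busy rendering, whereas tensor cores are additional units.

```python
# Peak packed-FP16 arithmetic for a hypothetical RDNA-style GPU.
# Example figures are assumptions for illustration, not the specs of a real card.

def peak_fp16_tflops(compute_units, lanes_per_cu, clock_ghz, fp16_rate):
    # Each lane does one FMA per clock (2 FLOPs); packed FP16 doubles that again.
    return compute_units * lanes_per_cu * 2 * fp16_rate * clock_ghz / 1000

# Hypothetical part: 40 CUs x 64 lanes, 1.9 GHz, 2x rate for packed FP16.
print(f"packed FP16 on general CUs: ~{peak_fp16_tflops(40, 64, 1.9, 2):.1f} TFLOPS")
```

Every FP16 operation spent on a DLSS-alike in that model is an operation the shaders cannot spend on the frame itself, which is the trade-off being described above.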
> Previous technologies that improved performance for cheap had lots of catches, like screenshots may look great but in motion the temporal IQ takes a huge hit. DLSS doesn't seem to have as many drawbacks.

DLSS has plenty of drawbacks; it generates just as many artifacts and temporal instabilities as other reconstruction techniques (more than some, less than others). Its usefulness derives not from absolute quality, but because it achieves that quality while running primarily on otherwise underused tensor cores. It thus has very little impact on general compute, meaning larger performance improvements versus other methods.
No idea why some still compare DLSS to RIS like it's the same thing or like it's something new
People have been using stuff like Reshade for years.
And NV also has image sharpening at the driver level
Do you feel like DLSS 2 and FidelityFX give comparable results here?
Source: https://wccftech.com/death-stranding-and-dlss-2-0-gives-a-serious-boost-all-around/
> Can games save space on textures by relying on DLSS? Since DLSS upscales and supersamples things, aren't the textures in those games also supersampled and upscaled? If so, couldn't developers save space by storing lower-resolution textures and letting DLSS do all the work? I know they probably won't do this since not all hardware supports DLSS, but if, say, they had only one piece of hardware to think about and that hardware has DLSS, would this be a viable way to save space and shrink down the size of games somewhat?

DLSS 2 is a temporal AA/upscaler at its heart.
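For anyone wondering what "temporal AA/upscaler at its heart" looks like mechanically, here is a heavily simplified sketch of temporal accumulation: jittered low-resolution frames are scattered into a high-resolution history buffer and blended over time. It is only meant to illustrate the idea; it has no motion-vector reprojection and no neural network, and it is not DLSS's actual algorithm.

```python
import numpy as np

def temporal_upscale(low_res_frames, scale, blend=0.1):
    """Accumulate jittered low-res frames into a high-res history buffer.

    low_res_frames: iterable of (frame, (jitter_x, jitter_y)) pairs, where each
    frame is an (h, w) float array rendered with the given sub-pixel jitter in [0, 1).
    """
    history = None
    for frame, (jx, jy) in low_res_frames:
        h, w = frame.shape
        # Scatter each low-res sample to its jittered position on the high-res grid.
        upscaled = np.zeros((h * scale, w * scale), dtype=np.float32)
        ys = np.arange(h)[:, None] * scale + int(jy * scale)
        xs = np.arange(w)[None, :] * scale + int(jx * scale)
        upscaled[ys, xs] = frame
        covered = np.zeros_like(upscaled)
        covered[ys, xs] = 1.0

        if history is None:
            history = upscaled
        else:
            # Exponential blend: keep most of the history, fold in the new samples
            # only where this frame actually provided data.
            history = np.where(covered > 0, (1 - blend) * history + blend * upscaled, history)
    return history
```

As the jitter cycles through different sub-pixel offsets, the history buffer gradually fills in all of the high-resolution sample positions; real techniques add motion-vector reprojection and heuristics (or, in DLSS 2.0's case, a trained network) to decide how much of that history can be trusted each frame.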
> Even if we discount the hardware side, wouldn't a theoretical AMD competitor still be years behind Nvidia's tech? They've been working on machine learning techniques for like a decade now.

I think the last time AMD even talked about the subject was when they briefly spoke about DirectML running on the Radeon VII. So presumably they would be piggybacking off that.
Will this tech apply to VR games in the future? Seems like it'd be perfect for that.
Good to see people finally coming around. When Nvidia announced this tech and their RTX cards, the response was pretty awful.
> The difference is that tensor cores are separate, additional hardware. The RPM for AMD runs on the general compute units. Therefore, running a DLSS-alike will use up far more resources that can't be used for the rest of rendering. It's analogous to running ray tracing in software as opposed to RT hardware.

As I understood it, DLSS runs mostly sequentially at the end of a frame, rather than in parallel with it, so that might be less of a problem than you think.
> DLSS has plenty of drawbacks; it generates just as many artifacts and temporal instabilities as other reconstruction techniques (more than some, less than others). Its usefulness derives not from absolute quality, but because it achieves that quality while running primarily on otherwise underused tensor cores.

It's far higher quality and more temporally stable than anything I've seen in other games.
You will see in an upcoming video that DLSS 2.0, in a direct comparison with checkerboard rendering, does not produce nearly as many ghosts, instabilities, or artefacts, not by a long shot. It in fact crushes checkerboard rendering from a quality and stability standpoint, with fewer real pixels as well. Have you done the comparison already, that you type that it is just as unstable, ghosty, or artefact-prone with such certainty and conviction?
I hadn't thought of that, but... oh yeah.
Something like DLSS 2.0 won't do shit when people can't get over the prices.
> You will see in an upcoming video that DLSS 2.0, in a direct comparison with checkerboard rendering, does not produce nearly as many ghosts, instabilities, or artefacts, not by a long shot. Have you done the comparison already, that you type that it is just as unstable, ghosty, or artefact-prone with such certainty and conviction?

Not to mention, with most games using TAA these days they already have temporal artifacts, and a "pristine and clean native image" isn't really a thing anymore. And it does seem that DLSS can get rid of some of those artifacts, even if it has some of its own, like that hair in Death Stranding.