I am aware and I agree, but making those claims about base 720p was what I meant by exaggeration. 1440p is really excellent.
Keep in mind base 720p to 2160p would be a DLSS low setup. The results get much better at base 1080p or 1440p.
The AI is so trained in this specific task that it essentially knows how to fill in the gaps with a high degree of accuracy.
I can't even understand how this works so well. It doesn't make sense that you can have parity with an internal resolution that's not even Full HD. Can someone explain this in the simplest terms possible?
I gave your 4K DLSS 720p idea a go, and while it does work and may look fine in screenshots, the thing you didn't mention is how much worse it looks in motion than, say, 4K DLSS 1080p.
It does make sense, but I think you are creating a couple of false expectations here. DLSS is awesome, but internal render resolution is not something that can be tuned down without starting to lose significantly in the image quality department, at least currently.
So it just boils down to machine learning...
The AI is so trained in this specific task that it essentially knows how to fill in the gaps with a high degree of accuracy.
This applies to machine learning in general!
So it just boils down to machine learning...
Can next gen consoles do this? I.e. is it something AMD can implement without needing new hardware? Would be a shame if next gen consoles miss out on this
They missed it. The new Nvidia card shown off five days ago is 4x more powerful than the PS5. Watch their show!
Patents mean very little, especially since we know the PS5 doesn't have any hardware like tensor cores.
This new PlayStation 5 patent teases NVIDIA DLSS style technology
Sony's newly published patent teases NVIDIA Deep Learning Super Sampling (DLSS) technology on its next-gen PlayStation 5 console. www.tweaktown.com
maybe it's still possible?
You can still do a version of machine learning upscaling without tensor cores. It was implemented in Control before DLSS 2.0. It obviously doesn't look as good as 2.0, but it looks far better than it would without it.
You need specialized hardware. NVIDIA cards have Tensor cores which perform the calculations. I don't think AMD has anything that we know of that can do similar calculations, if they've even bothered to train the models in the first place.
Since the XsX has some level of machine learning, I'd guess it's likely the PS5 will have a lesser amount of it because it's the same gpu architecture. So "DLSS" probably could be done, but it would eat up more ms per frame and probably be less worthwhile. The 3080 has something like ~10x the machine learning performance of the XsX, if I read the specs right.
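As a rough sanity check on why that ratio matters, here's a toy frame-budget calculation. All the numbers are illustrative assumptions (the ~1.5 ms cost and the 10x gap come from the guess above, not measurements):

```python
# Back-of-envelope (illustrative numbers, not measured): if an ML
# upscaling pass costs ~1.5 ms on a GPU with tensor-core-class
# throughput, a GPU with ~10x less ML throughput would spend ~15 ms
# on the same pass, which is most of a 60 fps frame budget.
dlss_ms_fast = 1.5          # assumed cost with dedicated ML units
ratio = 10                  # assumed throughput gap from the post above
budget_60fps = 1000 / 60    # ms per frame at 60 fps

dlss_ms_slow = dlss_ms_fast * ratio
print(f"slow-path cost: {dlss_ms_slow:.1f} ms of {budget_60fps:.1f} ms budget")
```

With those assumed numbers the pass alone would eat roughly 90% of a 16.7 ms frame, which is why it would "eat up more ms per frame and probably be less worthwhile."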
Say what now? That's my resolution as well :( Is this even true for 2.0 games?
What annoys me about DLSS is that it's very picky about what resolutions it will work for. If I want to use it with my monitor's 4K equivalent (5160x2160), DLSS won't work. But if I switch to 3840x2160, it works just fine. But I don't want to play in 16:9 :(
I tried it in Control, which has DLSS 2.0. It's disabled at that resolution.
Figures. Thanks
This is about scanning objects with a camera and it has nothing to do with DLSS; there was a thread about this with the same misconception and a false title, which was corrected later.
https://www.resetera.com/threads/so...scanning-see-threadmark.258075/#post-41091537
This is Control 1440p DLSS from only 576p that someone put up on Reddit (the guy has also included a 2880p comparison for fun).
He says in motion he can't notice a difference. The detail on the jacket gets a tad softer in DLSS mode but otherwise it's pretty close.
They need hardware modifications. Nvidea has the patent on that. Maybe in 3 years if AMD catches up, but we won't see it in the near future unfortunately on gaming consoles.This new PlayStation 5 patent teases NVIDIA DLSS style technology
Sony's newly published patent teases NVIDIA Deep Learning Super Sampling (DLSS) technology on its next-gen PlayStation 5 console.www.tweaktown.com
maybe it's still possible?
What the fuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuuck
It truly is magic!!!
Ideally DirectML will be competitive, and that should be vendor-agnostic.
If this will turn into some sort of GPU brand feature war it'll just SUCK. Like, this game has AMD marketing and you will only be able to use the (inevitable) AMD equivalent of DLSS with your AMD card and get x-times the performance compared to an Nvidia card.. On the other hand, this other game has Nvidia marketing...
Damn, massive bummer... they really need to address this. I thought arbitrary resolution DLSS support was one of the major features of 2.0, but I might be misremembering. I wasn't following the development too closely. I'm fairly sure I've seen ultrawide DLSS footage though, so I doubt/hope it's not the AR.
I tried it in Control, which has DLSS 2.0. It's disabled at that resolution.
5160x2160. Notice how render resolution is locked even though DLSS is selected.
3840x2160. Works fine.
Your resolution might be too high, but 3440x1440 21:9 works with DLSS in Control, according to this tweet.
Patents always baffle me, this seems like bullshit considering deep image reconstruction has been a thing for years. How do Sony get to patent this?
Also the technology to do this isn't really mature enough outside of nvidia's ecosystem. I doubt anything will come out of this.
I continue to struggle with DLSS enthusiasm, almost entirely because of how it fails in Death Stranding. Nvidia was very careful to use footage in the game's city zones to show off how DLSS 2.0 reconstructs content in those areas, and does so quite well. But as soon as you get into any predominantly organic landscape (which is a high percentage of the game), and then add rain or other particle effects (particularly in cut scenes), DLSS 2.0 loses a ton of detail. I just checked this again last night (for no reason at all) and it's startling how much detail is lost in an average rainstorm once DLSS is turned on.
I fully believe DLSS will be able to account for a wider range of 3D rendering scenarios. But for now, it's an uncanny valley situation for me. As soon as you lose *any* detail that a game creator intended to be visible for the sake of atmosphere, then I'm out--and impressively reconstructed textures and signs don't make up for this.
Meh, I suppose those who can afford to be on the cutting-edge all the time could hold this opinion.
Control had a shader version of DLSS that didn't incur a massive performance hit before the upgrade to DLSS 2.0. There's more than one way to do ML supersampling. It didn't look as good as 2.0, but it looked far better than simple upscaling.
There is a massive performance hit doing DLSS type computations without something like tensor cores. To the point the rewards are not worth the overall loss of normal GPU rendering.
The previous shader version did have limits though, such as constant ghosting and artifacts near meshed surfaces.
I think the biggest hurdle will be training the models and Nvidia has a huge head start in terms of infrastructure.
Aren't there actually PC titles that were AMD-partnered but later integrated DLSS? Or am I thinking of something else? (The Tomb Raider reboot sequels.)
I hope AMD is working on it, because it's gonna be absolute bullshit if it turns out that all the AMD-partnered PC games don't support DLSS.
Maybe not an "extra chip", but Tensor architecture is specifically optimized for the kind of sparse neural networks Nvidia's Deep Learning networks use. It's basically an AI coprocessor inside the GPU. Without that optimization, and without specific support for INT32 operations rather than FP32, a generic GPU or CPU would waste too much power trying to run this DLSS process.
It's not an extra chip. It is just an execution block inside the GPU. Doing this off-die would absolutely kill any efficiency gains for anything but pure inference compute.
Again, RT is less impressive right now because it's still early days and RT in games right now is just bolted on, more software optimizations and hardware improvements need to be made. Look at the marble demo from the recent Nvidia stream to get a taste of the end goal.
Just saying "DLSS is more impressive" because of where RT currently is is kind of missing the forest for the trees.
I should point out that the CUs do support int32. Now for NN we are actually talking about even lower precision, either fp16 or int8.
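For intuition on why int8 is enough for inference, here's a small illustrative sketch (my own toy example, not Nvidia's pipeline): quantize weights and activations to int8, do the multiply-accumulate entirely in integers, and rescale once at the end. The result stays close to the fp32 reference, while int8 math is far cheaper in silicon.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=256).astype(np.float32)   # "weights" (made-up data)
x = rng.normal(size=256).astype(np.float32)   # "activations" (made-up data)

def quantize(v):
    # Map floats to int8 with a single per-tensor scale factor.
    scale = np.abs(v).max() / 127.0
    return np.clip(np.round(v / scale), -127, 127).astype(np.int8), scale

wq, ws = quantize(w)
xq, xs = quantize(x)

exact = float(w @ x)  # fp32 reference dot product
# Integer multiply-accumulate (widened to int32 to avoid overflow),
# then one rescale back to real units at the end.
approx = int(wq.astype(np.int32) @ xq.astype(np.int32)) * ws * xs

print(f"fp32: {exact:.3f}  int8: {approx:.3f}")
```

The quantization error per element is bounded by half the scale step, so across a 256-element dot product the accumulated error stays small relative to the result, which is why inference (unlike training) tolerates such low precision.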
That is fascinating... it's basically rendering what was lost — in real time — and adding it to the frame...
Ideally DirectML will be competitive, and that should be vendor-agnostic.
At least that way, even if it's not quite as good as DLSS, you get something rather than nothing at all, in AMD-sponsored games.
I'm not sure if 2.0 can officially scale above 4K currently - though there's not really any reason why it shouldn't be able to.
2.1 adds a 9x scaling option which makes that easier to render by enabling 1440p to 8K scaling rather than only 2160p to 8K.
In the absolute most basic way, you train an AI on edge patterns.
It knows that when it sees a low resolution pattern which follows this pixel structure and brightness:
The original high resolution image it was derived from looked like this:
And then the AI tries to transform the top image to look more like the bottom one.
This works because the AI is trained on huge datasets which compares very high resolution images against the exact same image rendered at a low resolution.
So it learns the difference that resolution makes to an otherwise identical scene, and figures out the way to reverse it; starting with a low resolution input and turning it into a high resolution output.
Of course it is far more complicated than that, but that's the most basic way I can think to explain it.
DLSS runs at the end of the frame. You can't reconstruct an image before the low resolution input has been created first.
Because of that, I think the requirement for tensor cores is overstated. Tensor cores run the reconstruction faster, but it's not stealing resources away from the rest of the GPU to run this type of reconstruction on shaders like RDNA 2.0 is said to - they have already finished most of their work.
The thing is that DLSS builds up the image over multiple frames - at least eight of them - so if you're standing still it can do a fantastic job reconstructing the image to look just like native.
The lower the base resolution is, the more resolution you lose as soon as things start moving.
Now in some respects this is ideal for modern displays, since they blur the image as soon as anything is moving. But I do wonder whether this aspect of DLSS would be far more noticeable on an OLED running at 120Hz with BFI enabled for example. That display would have significantly less motion blur, and be more revealing of this aspect of DLSS, while a sample-and-hold LCD monitor will blur the image so much you may not notice it.
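A toy sketch of why accumulation recovers resolution on a static image (my own illustration, not Nvidia's actual algorithm): render at half resolution with a sub-pixel jitter that changes every frame, and after a few frames every high-resolution position has been sampled at least once.

```python
import numpy as np

rng = np.random.default_rng(1)
hi = rng.random(64)            # "ground truth" high-res scanline
n_frames = 8

covered = np.zeros(64, dtype=bool)
accum = np.zeros(64)
for f in range(n_frames):
    jitter = f % 2                   # alternate sub-pixel offset (0 or 1)
    idx = np.arange(jitter, 64, 2)   # this frame's half-res sampling grid
    accum[idx] = hi[idx]             # static scene: samples land exactly
    covered[idx] = True

print("high-res positions covered:", covered.mean())
print("max error on static scene:", np.abs(accum - hi).max())
```

Once the camera or objects move, those accumulated samples no longer line up; they have to be reprojected with motion vectors and rejected where they disagree with the new frame, which is exactly why detail drops in motion and why a lower base resolution suffers more from it.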
What are your thoughts on anti-aliasing? Particularly TAA.
GeForce FX?? Time to forgive maybe? 🤣
I'd like to see it in person, as the screenshots seem like they have an aggressive sharpening filter.
But I will never buy Nvidia again after I had three GeForce FX cards crap the bed in the space of six months.