I paid $250 for my RTX 2060 card; I would say it was worth it after playing Metro, Quake 2 RTX, and Control. And I have never seen lighting and reflections like I have with RT on.
Yeah, but DLSS 1.9 looked terrible, had to be "trained" by the devs ahead of time, gave DLSS a bad name, and that's why they moved DLSS to the Tensor cores with 2.0. So, why would the consoles want to replicate that?

DLSS 1.0 used all tensor cores and looked worse than regular upscaling. My point is it's still "deep learning super sampling" and didn't require any specialized hardware. We don't know exactly what changed between 1.9 and 2.0, or how much could be achieved with or without tensor cores.
It's a clever mix of screen-space reflections and environment-specific cube maps, matched up well enough that when the screen-space reflections get obscured and disappear, the cube map takes their place.
Screen Space reflections, combined with talented artists and clever use of cube maps.
It's a pretty good technique since it's relatively 'cheap' compared to more sophisticated ways of creating reflections, though it comes with caveats, like the reflections being much lower resolution (usually a quarter or less of the resolution of whatever is being reflected).
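To make the fallback concrete: the trick being described boils down to a confidence-weighted blend - trust the screen-space ray where it's valid, fade to the cube map where it isn't. A minimal C++-flavored sketch (all the names and the confidence value here are made up for illustration; a real engine does this in a shader with a ray-marched depth buffer and prefiltered cube maps):

```cpp
#include <algorithm>
#include <cstdio>

// Toy color type standing in for a shader float3.
struct Color { float r, g, b; };

static Color lerp(const Color& a, const Color& b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Hypothetical SSR result: a real renderer ray-marches the depth buffer
// and derives "confidence" from ray validity and distance to screen edges.
struct SSRSample { Color color; float confidence; };

Color reflectionColor(const SSRSample& ssr, const Color& cubeMap) {
    // Where the screen-space ray is valid, use it; as it breaks down
    // (ray leaves the screen, reflected object is occluded), fade to
    // the baked cube map so the reflection never just vanishes.
    float t = std::clamp(ssr.confidence, 0.0f, 1.0f);
    return lerp(cubeMap, ssr.color, t);
}

int main() {
    SSRSample ssr { {0.9f, 0.4f, 0.2f}, 0.25f }; // mostly invalid ray
    Color cubeMap { 0.3f, 0.5f, 0.8f };
    Color out = reflectionColor(ssr, cubeMap);
    std::printf("blended reflection: %.2f %.2f %.2f\n", out.r, out.g, out.b);
}
```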
They were great, weren't they? DF talked about it - a mix of SSR, blended into cube/reflection maps at more acute angles where the SSR would start to break down.

In the best cases it's almost seamless unless you're really looking for it, and you can see reflections of things not on screen. But sometimes the environment baked into the reflection will be lower-res or slightly off alignment, and then it's more noticeable.

But considering that's on a PS4, I'd totally take an adaptation of that in specific areas where full RT isn't completely necessary. Free up some of that performance to put it where you need it more.

Thank you for the explanations! I haven't seen ray tracing in person yet, but they definitely made me question how the hell they were doing those reflections. It's going to be exciting to see what devs can do with it when it comes to next-gen.
There is no inherent performance cost to ray tracing. You can use it in ways that won't hit performance any harder than the techniques it replaces visually.
Ray tracing is not worth it at all on any platform, PC included. It's the latest buzzword from Nvidia to sell overpriced cards to the PC idiot race. It was basically impossible to make the whales pay $1,000 or $1,500 for a new graphics card without some kind of justification. RTX is that: a giant waste of GPU power.

In the real world, you have tons of options to get nearly the same lighting results while paying a lot less in GPU power. But then, how do you get the money from PC whales?
The improvements are also on the developer side. From DF videos, it seems that the ability to place a light source and have it "work" as it would in the real world is a time savings for development. I am sure the real advantages won't be seen for a little while yet, but it is clear that ray tracing is the future of graphics.
What?

I do not think this is accurate? Any kind of RT will have a greater cost than vanilla GI lighting/reflections.

It is accurate. There are several possibilities where using RT instead of rasterization would net the same performance or even be faster. There is no "vanilla" anything in modern rendering - even untextured polygons can be rendered differently these days.
I wonder if raytracing will eventually become the new "pop-in" on console versions. Like you move 5ft away from something and it switches from raytracing to a cube map or something.

Already happening this gen with SSRs and dynamic lights, nothing new with RT here.
DLSS 1.0 did not use the Tensor cores; it used shaders.
My point is it's not worth it at all. The GPU power cost is gigantic compared to the small gain in lighting. It's just stupid at this point. You pay $1,500 for a nice-looking puddle sometimes; it's just ridiculous.
For the devs, it also means spending less time on lighting: once you've decided on the lighting model, the hardware does it for you. That makes it easy to integrate into editors, with no need to spend hours manually baking lighting to make it look good enough.
And this isn't even getting into things that can only be done with ray-traced lighting, such as physically accurate reflections of off-screen geometry.
I really doubt any AAA game will use path-traced lighting anytime soon.
Massive performance cost.
Sure they can, but they don't have dedicated hardware like DLSS 2.0, so you're always using resources that could be used somewhere else.
I mean, Microsoft at least has been experimenting with DirectML. I am not sure how it compares performance-wise to DLSS, but I am guessing it will help performance. I think Ratchet and Clank looked amazing with the ray tracing they showed in their gameplay demo, and that game was running in native 4K.
Image of DirectML working, via overclock3d.net
Link to article: https://www.overclock3d.net/news/so...on_game-changer_that_nobody_s_talking_about/1
Yeah, I'd honestly rather have all the resources pumped into better image quality or frame rate.

Yup, me too.
DLSS 1.0 used the tensor cores. And we do know the changes between DLSS 1.9 and 2.0.
DirectML would just be an API that manufacturers can implement. It says the image source is Nvidia, so it's most likely DLSS via DirectML.

You are right, although the article states it is from "This talk, which is entitled 'Deep Learning for Real-Time Rendering: Accelerating GPU Inferencing with DirectML and DirectX 12', showcases Nvidia hardware upscaling Playground Games' Forza Horizon 3 from 1080p to 4K using DirectML in real-time." And it doesn't directly mention DLSS.
RT on consoles will be much more efficient than the current Turing brute-force implementations.
Is DLSS 1.9 possible on 10 series then?
DLSS 1.9 was a single version of it that was a proto-2.0 running on normal ALUs without any machine learning at all; it was a hand-tailored thing.
DLSS 2.0 uses tensor cores but is a different approach entirely to DLSS 1.0. It is also cheaper.
Yeah, the brute force approach that we see on first gen RTX compared to the much better hybrid RT software that we'll see on next-gen consoles.
It's *possible* but not necessarily useful. The thing about Tensor cores is they do a full 4x4 matrix FMA (d = a * b + c) in one go. The Pascal implementation needs to do it step by step, which requires 12x as much math to be performed, because it can only operate on one row at a time.

RTX Voice, for instance, seems like a simple task for ML. On my Pascal 1080 Ti it takes up something like 10% of the card's compute power, just to apply noise reduction to audio through ML.

I see, that's too bad.
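For context on the matrix op described above: a tensor core retires the whole 4x4 d = a*b + c in a single operation, while ordinary FP32 ALUs have to issue every multiply-add one at a time. A rough CPU-side C++ sketch of that scalar decomposition (illustrative only, not actual GPU code):

```cpp
#include <cstdio>

// d = a * b + c on 4x4 matrices: the operation a tensor core performs
// in one go. Done scalar-style, it decomposes into 4*4*4 = 64 separate
// multiply-adds, issued individually.
void fma4x4(const float a[4][4], const float b[4][4],
            const float c[4][4], float d[4][4]) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = c[i][j];
            for (int k = 0; k < 4; ++k)
                acc += a[i][k] * b[k][j]; // one FMA per step
            d[i][j] = acc;
        }
}

int main() {
    float a[4][4] = {{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}; // identity
    float b[4][4] = {{1,2,3,4},{5,6,7,8},{9,10,11,12},{13,14,15,16}};
    float c[4][4] = {};                                        // zeros
    float d[4][4];
    fma4x4(a, b, c, d);
    std::printf("d[1][2] = %g\n", d[1][2]); // 7, since d = b here
}
```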
These new consoles and their use of ray tracing will be transformative by 2023. Let the devs play a bit, we're gonna see some cool shit.

Ding ding.
The thing is, it's impressive tech by all means, but one can't help but feel that it simply isn't worth the performance cost.
I'd say even something like 2.0 would be possible. MS has INT8/16 support, so you could probably simulate tensor cores (slowly) using INT16? I think something similar in quality/performance to DLSS 2.0 will be possible in the next few years.
GP104 only has one Vec2 FP16x2 ALU per 128 FP32 ALUs. So while you can run them using INT16 or INT8, it's all promoted to FP32 and only runs on the FP32 pipes, with no packed-math speedup, since there's only one Vec2 FP16x2 ALU per SM and it's only there as a fallback for FP16 code that *must* run natively rather than be emulated by promoting to FP32.
Turing on the other hand has Tensor Cores which do the work for them.
Right but I thought MS mentioned for RDNA2 they have INT16/8 packed?
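For anyone unsure what "packed" INT16/8 math means in this exchange: two (or four) narrow lanes carried in one 32-bit register and processed by a single 32-bit operation. A CPU-side SWAR sketch of the idea (a software analogue only; hardware packed math does this natively in each ALU):

```cpp
#include <cstdint>
#include <cstdio>

// SWAR-style packed add: two independent 16-bit lanes in one 32-bit word,
// updated by a single 32-bit operation. Hardware "packed math" (2x FP16 or
// 2x INT16 per 32-bit ALU) is the same idea implemented natively.
uint32_t add_packed_u16x2(uint32_t x, uint32_t y) {
    const uint32_t H = 0x80008000u; // top bit of each 16-bit lane
    // Add the low 15 bits of each lane, then patch the top bits back in,
    // so a carry can't spill from the low lane into the high lane.
    return ((x & ~H) + (y & ~H)) ^ ((x ^ y) & H);
}

int main() {
    uint32_t x = (7u << 16) | 40000u; // lanes: hi=7, lo=40000
    uint32_t y = (9u << 16) | 30000u; // lanes: hi=9, lo=30000
    uint32_t s = add_packed_u16x2(x, y);
    std::printf("hi=%u lo=%u\n",
                (unsigned)(s >> 16), (unsigned)(s & 0xFFFFu));
    // hi=16, lo=4464 (70000 wraps mod 65536), two adds for one operation
}
```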