
z1ggy

Member
Oct 25, 2017
4,194
Argentina
Ray tracing is not worth it at all on any platform, PC included. It's the latest buzzword from Nvidia to sell overpriced cards to PC idiots. It was basically impossible to make the whales pay $1000 or $1500 for a new graphics card without some kind of justification. RTX is that justification, and a giant waste of GPU power.

In the real world, you have tons of options to get nearly the same lighting results while paying a lot less in GPU power. But then, how do you get the money from PC whales?
I paid $250 for my RTX 2060, and I would say it was worth it after playing Metro, Quake 2 RTX, and Control. I have never seen lighting/reflections like the ones you get with RT on.

God bless DLSS 2.0
 

JeffGubb

Giant Bomb
Verified
Oct 25, 2017
842
Sure they can, but they don't have dedicated hardware like DLSS 2.0, so you're always using resources that could be used somewhere else.

This is circular: if you are using resources to improve performance, then you are freeing up resources. I would bet that Microsoft seriously considered machine learning for resolution upscaling when designing the new Xbox, and I'd bet Cerny and his team considered it as well.
 

MajesticSoup

Banned
Feb 22, 2019
1,935
Yeah, but DLSS 1.9 looked terrible, had to be "trained" by the devs ahead of time, and gave DLSS a bad name; that's why they moved DLSS to the tensor cores with 2.0.

So, why would the consoles want to replicate that?
DLSS 1.0 used all the tensor cores and looked worse than regular upscaling. My point is that 1.9 is still "deep learning super sampling" and didn't require any specialized hardware. We don't know exactly what changed between 1.9 and 2.0, or how much could be achieved with or without tensor cores.
 

Slaythe

The Wise Ones
Member
Oct 25, 2017
15,866
No. Horrible idea, and it will hinder next gen until the PS5 Pro and co. arrive with dedicated AI upscaling.

The insane performance gain with all the bells and whistles is just unmatched. And UE5 will take care of dynamic lighting without requiring full RTX as well.

(and if casuals can't tell the difference between 30 fps and 60 fps, they'll never see the difference between AI upscale and native)
 

J-Skee

The Wise Ones
Member
Oct 25, 2017
11,114
They called them screen space reflections.
It's a clever mix of screen space reflections and environment-specific cube maps, matched up really well so that when the screen space reflections get obscured and disappear, the cube map takes their place.
Screen space reflections, combined with talented artists and clever use of cube maps.

It's a pretty good technique since it's relatively "cheap" compared to more sophisticated ways of creating reflections, though it comes with caveats, like the reflections being rendered at much lower resolution (usually a quarter or less of whatever is being reflected).
They were great, weren't they? DF talked about it - a mix of SSR, blended into cube/reflection maps at the more acute angles where the SSR would start to break down.

In the best cases it's almost seamless unless you're really looking for it, and you can see reflections of things not on screen. But sometimes the environment baked into the reflection will be lower res or slightly misaligned, and then it's more noticeable.

But considering that's on a PS4, I'd totally take an adaptation of that in specific areas where full RT isn't completely necessary. Free up some of that performance to put where you need it more.
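To make that SSR-to-cube-map blend concrete, here is a minimal sketch (plain C++ standing in for shader code; the confidence value and every name are invented for illustration, not taken from any shipping engine):

```cpp
#include <algorithm>

struct float3 { float r, g, b; };

static float3 lerp(float3 a, float3 b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

struct SsrResult {
    float3 color;      // what the screen-space ray hit
    float confidence;  // 1 = clean hit, 0 = ray left the screen or broke down
};

// Trust SSR where it is valid; fade to the prebaked cube map as the
// screen-space trace breaks down (grazing angles, screen edges, occlusion).
float3 ShadeReflection(const SsrResult& ssr, float3 cubeMapSample) {
    float t = std::clamp(1.0f - ssr.confidence, 0.0f, 1.0f);
    return lerp(ssr.color, cubeMapSample, t);
}
```

When the two sources line up well, the crossfade is what makes the handoff hard to spot; when the baked cube map is low-res or misaligned, it is exactly the artifact described above.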
Thank you for the explanations! I haven't seen ray tracing in person yet, but those reflections definitely made me question how the hell they were being done. It's going to be exciting to see what devs can do with ray tracing next gen.
 

Niks

Member
Oct 25, 2017
3,300
There is no inherent performance cost to ray tracing. You can use it in ways that won't hit performance more than what it replaces visually.

What?
I don't think this is accurate. Any kind of RT will have a greater cost than vanilla GI lighting/reflections.
 

Piggus

Member
Oct 27, 2017
4,700
Oregon
Ray tracing is not worth it at all on any platform, PC included. It's the latest buzzword from Nvidia to sell overpriced cards to PC idiots. It was basically impossible to make the whales pay $1000 or $1500 for a new graphics card without some kind of justification. RTX is that justification, and a giant waste of GPU power.

In the real world, you have tons of options to get nearly the same lighting results while paying a lot less in GPU power. But then, how do you get the money from PC whales?

lol, give me a break. The difference between the best screen-space reflections and ray-traced reflections is huge. It also cuts down on dev time because you don't have to strategically place cubemaps everywhere. We already saw a number of PS5 games running at native 4K with ray-traced reflections.
 

badabeezy

Member
Oct 27, 2017
195
Ray tracing is not worth it at all on any platform, PC included. It's the latest buzzword from Nvidia to sell overpriced cards to PC idiots. It was basically impossible to make the whales pay $1000 or $1500 for a new graphics card without some kind of justification. RTX is that justification, and a giant waste of GPU power.

In the real world, you have tons of options to get nearly the same lighting results while paying a lot less in GPU power. But then, how do you get the money from PC whales?
The improvements are also on the developer side. From DF videos, it seems the ability to place a light source and have it "work" as it would in the real world is a time saver for development. I am sure the real advantages won't be seen for a little while yet, but it is clear that ray tracing is the future of graphics.
 

Jedi2016

Member
Oct 27, 2017
15,729
How do we know that they don't have something akin to DLSS built into the new systems? There's a LOT they haven't shown us yet, remember.
 

dgrdsv

Member
Oct 25, 2017
11,886
What?
I don't think this is accurate. Any kind of RT will have a greater cost than vanilla GI lighting/reflections.
It is accurate. There are scenarios in which using RT instead of rasterization would net the same performance or even be faster. There is no "vanilla" anything in modern rendering - even untextured polygons can be rendered in several different ways these days.

People seem to not understand that RT isn't an "effect", it's a rendering method.
 

BeI

Member
Dec 9, 2017
5,986
I wonder if raytracing will eventually become the new "pop-in" on console versions. Like you move 5ft away from something and it switches from raytracing to a cube map or something.
 

zma1013

Member
Oct 27, 2017
7,687
I wonder if raytracing will eventually become the new "pop-in" on console versions. Like you move 5ft away from something and it switches from raytracing to a cube map or something.

At least in the Battlefield V implementation there was a distance limit to the reflections, beyond which everything just disappeared. Though I don't think that game was built from the ground up with ray tracing in mind; it seemed more like something tacked on later.
 

Rpgmonkey

Member
Oct 25, 2017
1,348
I'd say we're already seeing some of the ways developers will be getting it to work:

- Don't run at native 4K (or 60FPS for that matter in some cases)
- Set thresholds on material properties or distances for when to use RT and when to fall back to a less-expensive method
- Don't do things like display reflections at full resolution
- Strategically accept a bit of noise in exchange for better performance
- Disable it on certain objects
- Avoid using it in non-optimal scenarios (accurate reflections on the River Thames in Watch Dogs for example would require somewhat diffuse reflections on a very large surface, which could be quite expensive for an open world game)

Probably more that I can't think of or am not aware of (a rough sketch of that kind of per-pixel decision logic is below). Ray tracing (in this context) is early, and the consoles aren't going to be magic; devs just have to make various compromises to get a result that works. If they manage a result that looks better and/or saves a bit of dev time, then it probably was an overall success. It's not inherently unbearably slow, but we're a ways off from hardware that lets it be casually thrown around everywhere.
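As a rough illustration of the list above, here is a hypothetical per-pixel decision cascade. The function names and cutoff values are invented; real engines expose different knobs:

```cpp
enum class ReflectionPath { RayTraced, ScreenSpace, CubeMap };

struct SurfaceInfo {
    float roughness;       // 0 = mirror, 1 = fully diffuse
    float distanceToCam;   // meters
    bool  rtEnabledObject; // artists can opt objects out of RT entirely
};

ReflectionPath PickReflectionPath(const SurfaceInfo& s) {
    if (!s.rtEnabledObject)      return ReflectionPath::CubeMap;     // RT disabled on this object
    if (s.distanceToCam > 40.0f) return ReflectionPath::CubeMap;     // too far: cheap fallback
    if (s.roughness > 0.35f)     return ReflectionPath::ScreenSpace; // rough surface: rays wasted
    return ReflectionPath::RayTraced;                                // near and glossy: spend rays
}
```

Tightening or loosening those cutoffs is exactly the quality-versus-frame-time tuning being described.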
 

plagiarize

It's not a loop. It's a spiral.
Moderator
Oct 25, 2017
27,569
Cape Cod, MA
Absolutely.

With a fixed hardware design to target, there's no reason you can't figure out how to spend the rays available to you on things like fixing glaring SSR artifacts, etc.
 

Gitaroo

Member
Nov 3, 2017
8,015
I'm sure two years from now there will be much cheaper software solutions available that run on all hardware. They may not be the best, but look at CryEngine's RT solution: pretty convincing, and it runs on any hardware. I am surprised Sony didn't have an image scaling solution for PS5 at all, given how good the digital imaging capabilities of their other products are, how much they talk about AI, and how the PS4 Pro was built with upscaling in mind. I don't want to say secret sauce, but is there something more they haven't talked about? They did list machine learning in an earlier slide.
 

Lump

One Winged Slayer
Member
Oct 25, 2017
16,039
Raytracing is great for the games that support it. DLSS makes it worthwhile on the current generation of RTX cards. The real problem right now isn't the hardware; it's the low number of titles that take advantage of proper raytracing as well as DLSS 2.0.
 

cgpartlow

Member
Oct 27, 2017
3,006
Seattle, WA
I mean, Microsoft at least has been experimenting with DirectML. I am not sure how it compares performance-wise to DLSS, but I am guessing it will help performance. I think Ratchet and Clank looked amazing with the ray tracing shown in its gameplay demo, and that game was running at native 4K.


[Image: DirectML upscaling demo, via overclock3d.net]


Link to article: https://www.overclock3d.net/news/so...on_game-changer_that_nobody_s_talking_about/1
 

Vinx

Member
Sep 9, 2019
1,419
DLSS 1.0 used all the tensor cores and looked worse than regular upscaling. My point is that 1.9 is still "deep learning super sampling" and didn't require any specialized hardware. We don't know exactly what changed between 1.9 and 2.0, or how much could be achieved with or without tensor cores.
DLSS 1.0 did not use the tensor cores; it used shaders.

And we do know the changes between DLSS 1.9 and 2.0.
 

mugurumakensei

Elizabeth, I’m coming to join you!
Member
Oct 25, 2017
11,330
My point is it's not worth it at all. The GPU power cost is gigantic compared to the small gain in lighting. It's just stupid at this point. You pay $1500 for a nice-looking puddle sometimes; it's ridiculous.

For the devs, it also means spending less time on lighting: once you've decided on the lighting model, the hardware does it for you. It's thus easy to integrate into editors, and there's no need to spend hours manually baking lighting to make it look good enough.

And this isn't even getting into things that can only be done with ray-traced lighting, such as physically accurate reflections of off-screen geometry.
 
OP

Sems4arsenal

Member
Apr 7, 2019
3,627
For the devs, it also means spending less time on lighting: once you've decided on the lighting model, the hardware does it for you. It's thus easy to integrate into editors, and there's no need to spend hours manually baking lighting to make it look good enough.

And this isn't even getting into things that can only be done with ray-traced lighting, such as physically accurate reflections of off-screen geometry.

I really doubt any AAA game will use path-traced lighting anytime soon.

Massive performance cost.
 

mugurumakensei

Elizabeth, I’m coming to join you!
Member
Oct 25, 2017
11,330
I really doubt any AAA game will use path-traced lighting anytime soon.

Massive performance cost.

Hmm, I disagree. I think each AAA studio will be strategic, especially those with in-house engines. They will find which ray-tracing element makes their game pop best while still hitting their performance target (for many titles this will be an unlocked 30fps), with the goal of shipping at least one title using each ray-traced feature so their engines are forward compatible for the day when it's possible to use them all.
 

Trup1aya

Literally a train safety expert
Member
Oct 25, 2017
21,390
Sure they can, but they don't have dedicated hardware like DLSS 2.0, so you're always using resources that could be used somewhere else.

The DLSS 2.0 process occurs in series with the rest of the rendering pipeline, not in parallel. The dedicated hardware doesn't let you simultaneously use other resources elsewhere: the GPU processes the scene, the tensor cores do the upscaling work, then the GPU does its post work.

MS' solution will have the same workflow, but will be a bit slower.
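A minimal sketch of that serial dependency, with invented stage names; the point is that the upscale step sits between rendering and post-processing, so dedicated upscaling hardware shortens the frame rather than freeing the GPU to do other work at the same time:

```cpp
// Hypothetical frame flow; every name here is made up for illustration.
struct Frame { int width = 2560, height = 1440; };

void RenderSceneAtLowRes(Frame&) { /* shader cores: geometry, lighting, reflections */ }
void UpscaleWithML(Frame& f)     { f.width = 3840; f.height = 2160; }  // tensor cores, or a shader-core fallback
void RunPostProcessing(Frame&)   { /* shader cores again: tonemap, grain, UI */ }

void RenderFrame(Frame& f) {
    RenderSceneAtLowRes(f);  // must finish before upscaling can start
    UpscaleWithML(f);        // must finish before post can start
    RunPostProcessing(f);    // the three stages never overlap within a frame
}
```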
 
Last edited:

Madjoki

Member
Oct 25, 2017
7,230
I mean, Microsoft at least has been experimenting with DirectML. I am not sure how it compares performance-wise to DLSS, but I am guessing it will help performance. I think Ratchet and Clank looked amazing with the ray tracing shown in its gameplay demo, and that game was running at native 4K.


[Image: DirectML upscaling demo, via overclock3d.net]


Link to article: https://www.overclock3d.net/news/so...on_game-changer_that_nobody_s_talking_about/1

DirectML would just be an API that manufacturers can implement. It says the image source is Nvidia, so it's most likely DLSS via DirectML.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,932
Berlin, 'SCHLAND
DLSS 1.0 did not use the Tensor cores it used shaders.

And we do know the changes between DLSS 1.9 and 2.0.
DLSS 1.0 used the tensor cores.

DLSS 1.9 was a one-off version, a proto-2.0 running on the normal ALUs without any machine learning at all; it was a hand-tailored thing.

DLSS 2.0 uses the tensor cores but is an entirely different approach from DLSS 1.0. It is also cheaper.
 

Deleted member 34714

User requested account closure
Banned
Nov 28, 2017
1,617
I hope both platform holders are looking into it seriously. So far my main concern is that both consoles are being sold with barely ANY RT feature talk beyond "we support RT." So far I only know Sony is attempting it; I hope MS shows it off at their second event. I just don't think Nvidia, with RTX and DLSS, can push RT alone.
 

GMM

Banned
Oct 27, 2017
5,484
Like anything in real-time rendering, it's all about compromising to reach what appears to be a better visual output; modern-day ray tracing is no exception.

Ray tracing as we see it in games is true ray tracing, like what we see in path-traced renderers such as V-Ray: it calculates light hitting surfaces the way it happens in the real world, based on physical values. But even my RTX 2080 Ti doesn't have the power to do this with an unlimited number of samples or bounces.

Typically, ray-traced reflections in a real-time renderer like Unreal Engine 4 are only calculated for a pixel if the material is glossy enough, i.e. its roughness falls below a certain threshold; all other pixels fall back to screen space reflections or prebaked reflection probes. Another way to improve speed is to test whether the pixel's reflection is already present in the screen space reflection result that has been calculated anyway, and to use the SSR result instead of tracing an RT reflection for that pixel.

Now let's presume we do need an RT calculation for that pixel: how many reflection bounces should we do if the surface we hit in the reflection is itself glossy enough to qualify? The more bounces we need, the more compute power we need.

It's also really computationally heavy to do this for every pixel on screen, so what if we only calculate a limited number of RT pixels every frame and temporally reconstruct the result over 3-5 frames, making it even faster at the cost of ghosting artifacts when the viewport changes significantly?

However developers choose to implement RT features, it will always be about compromising until they reach the best performance-to-visual-output ratio. Upscaling algorithms like DLSS are just another way of compromising; RT can be implemented in many, many ways.
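As a sketch of that last idea, here is one hypothetical way to trace a quarter of the pixels each frame and reconstruct the rest temporally; the interleave pattern and blend weight are invented for illustration:

```cpp
#include <vector>

struct Color { float r = 0, g = 0, b = 0; };

// Stand-in for the expensive per-pixel ray-traced reflection.
Color TraceReflectionRay(int /*x*/, int /*y*/) { return {}; }

// Trace 1/4 of the pixels per frame; a full refresh takes ~4 frames.
void AccumulateReflections(std::vector<Color>& history, int w, int h, int frame) {
    const int phase = frame % 4;   // which quarter of the pixels gets fresh rays
    const float blend = 0.25f;     // new-sample weight; stale history causes ghosting
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if ((x + 2 * (y & 1)) % 4 != phase) continue;  // not this pixel's turn
            const Color fresh = TraceReflectionRay(x, y);
            Color& hist = history[static_cast<size_t>(y) * w + x];
            hist.r += (fresh.r - hist.r) * blend;  // exponential moving average
            hist.g += (fresh.g - hist.g) * blend;
            hist.b += (fresh.b - hist.b) * blend;
        }
    }
}
```

The ghosting mentioned above falls out of the math: after a camera cut, the history buffer holds several frames of the old view until enough fresh rays overwrite it.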
 

cgpartlow

Member
Oct 27, 2017
3,006
Seattle, WA
DirectML would just be API that manufactuers can implment. It says image source is Nvidia, so most likely DLSS via DirectML.
You are right, although the article states the image comes from a talk entitled "Deep Learning for Real-Time Rendering: Accelerating GPU Inferencing with DirectML and DirectX 12", which "showcases Nvidia hardware upscaling Playground Games' Forza Horizon 3 from 1080p to 4K using DirectML in real-time." It doesn't directly mention DLSS.
 

7thFloor

Member
Oct 27, 2017
6,647
U.S.
DLSS 1.0 used the tensor cores.

DLSS 1.9 was a one-off version, a proto-2.0 running on the normal ALUs without any machine learning at all; it was a hand-tailored thing.

DLSS 2.0 uses the tensor cores but is an entirely different approach from DLSS 1.0. It is also cheaper.
Is DLSS 1.9 possible on the 10 series, then?
 

MrKlaw

Member
Oct 25, 2017
33,076
Is DLSS 1.9 possible on the 10 series, then?

I'd say even something like 2.0 would be possible. MS has INT8/INT16 support, so you could probably simulate tensor cores (slowly) using INT16? I think something similar in quality/performance to DLSS 2.0 will be possible in the next few years.
 

Veliladon

Member
Oct 27, 2017
5,559
Is DLSS 1.9 possible on the 10 series, then?

It's *possible* but not necessarily useful. The thing about tensor cores is that they do a full 4x4 matrix FMA (D = A * B + C) in one go. The Pascal implementation needs to do it step by step, which requires 12x as much math, because it can only operate on one row at a time.

RTX Voice, for instance, seems like a simple task for ML, yet on my Pascal 1080 Ti it takes up something like 10% of the card's compute power just to apply ML noise reduction to audio.
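For reference, this is the operation a tensor core fuses into a single instruction, written out as scalar C++; without that hardware it decomposes into dozens of individual multiply-adds:

```cpp
// 4x4 matrix fused multiply-add, D = A * B + C: one op on a tensor core,
// 64 scalar FMAs when done element by element.
void Mma4x4(const float A[4][4], const float B[4][4],
            const float C[4][4], float D[4][4]) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];
            for (int k = 0; k < 4; ++k)
                acc += A[i][k] * B[k][j];
            D[i][j] = acc;
        }
}
```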
 

7thFloor

Member
Oct 27, 2017
6,647
U.S.
It's *possible* but not necessarily useful. The thing about tensor cores is that they do a full 4x4 matrix FMA (D = A * B + C) in one go. The Pascal implementation needs to do it step by step, which requires 12x as much math, because it can only operate on one row at a time.

RTX Voice, for instance, seems like a simple task for ML, yet on my Pascal 1080 Ti it takes up something like 10% of the card's compute power just to apply ML noise reduction to audio.
I see, that's too bad
 

LCGeek

Member
Oct 28, 2017
5,857
I've said this numerous times: I'm glad some of you weren't around for the late '90s or the early part of this century.

AA would've gotten nowhere with this attitude.

I say devs can do what they want, as long as the frame pacing is good at 30fps and above.

I don't care if a game looks like Quake 1, RDR2, or TLOU2; it's the content that ultimately matters.

I want ray tracing. After seeing so many decent global illumination hacks, plus Minecraft RTX, Quake 2 RTX, and Metro, I'd rather not go back to flat, non-dynamic lighting, to say the least. Devs will find a sweet spot.
 

JahIthBer

Member
Jan 27, 2018
10,383
RT reflections are not that demanding at all if done right; a 1080 Ti can run BFV's RT reflections at decent performance. It's RT GI, like in Control, Quake 2, Minecraft, and Exodus, that is demanding as all fuck; we probably won't see much of that this gen anyway. It's hard to fake good real-time reflections, but you can fake good lighting.
 

EggmaniMN

Banned
May 17, 2020
3,465
Some people around here have a really weird hangup about ray tracing. Yes, it's extremely noticeable; yes, it can use a lot of resources if you're doing crazy stuff; and yes, that will change as we continue through the generation. But there's more to it than full-on path tracing, and there's more to visuals than RAW POWER. Ray-traced sound will be massive too, and I'm sure Cerny's focus on sound will show that as well.
 

Tora

The Enlightened Wise Ones
Member
Jun 17, 2018
8,640
These new consoles and their use of ray tracing will be transformative by 2023. Let the devs play a bit, we're gonna see some cool shit.
Ding ding

Same thing happens every gen: people are a bit underwhelmed at first, but year three is usually when things hit full swing. Of course, there are still gems that come out before then.
 

Veliladon

Member
Oct 27, 2017
5,559
I'd say even something like 2.0 would be possible. MS has INT8/INT16 support, so you could probably simulate tensor cores (slowly) using INT16? I think something similar in quality/performance to DLSS 2.0 will be possible in the next few years.

GP104 only has one Vec2 FP16*2 ALU per 128 FP32 ALUs. So while you can run INT16 or INT8 code, it's all promoted to FP32 and runs on the FP32 pipes with no packed-math speedup; the only reason GP104 even has that one Vec2 FP16*2 ALU per SM is to run FP16 code that *must* execute natively rather than being promoted to FP32.

Turing, on the other hand, has tensor cores that do the work for it.
 

kaputt

Member
Oct 27, 2017
1,205
I don't think the current cost is worth it.

Devs can, of course, find cheaper and more efficient ways to do it. For example, anti-aliasing was such a hindrance during the PS360 era, but now everything looks so freaking sharp with temporal AA techniques, which as far as I know are way less demanding than MSAA.
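A minimal sketch of the temporal AA idea, for illustration only; real TAA also jitters the camera each frame, reprojects the history with motion vectors, and clamps it to limit ghosting, all omitted here:

```cpp
struct Pixel { float r, g, b; };

// Blend the newly rendered pixel into the accumulated history, instead of
// rasterizing and storing multiple coverage samples per pixel like MSAA.
Pixel ResolveTaa(Pixel history, Pixel current) {
    const float alpha = 0.1f;  // small weight: long accumulation, smoother edges
    return { history.r + (current.r - history.r) * alpha,
             history.g + (current.g - history.g) * alpha,
             history.b + (current.b - history.b) * alpha };
}
```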
 

MrKlaw

Member
Oct 25, 2017
33,076
GP104 only has one Vec2 FP16*2 ALU per 128 FP32 ALUs. So while you can run INT16 or INT8 code, it's all promoted to FP32 and runs on the FP32 pipes with no packed-math speedup; the only reason GP104 even has that one Vec2 FP16*2 ALU per SM is to run FP16 code that *must* execute natively rather than being promoted to FP32.

Turing, on the other hand, has tensor cores that do the work for it.

Right, but I thought MS mentioned that RDNA 2 has packed INT16/INT8 math?
 

orava

Alt Account
Banned
Jun 10, 2019
1,316
Console warriors' very well informed opinions about ray tracing are always a delight to read.