Status
Not open for further replies.

TheNerdyOne

Member
Oct 28, 2017
521
I'm just very glad that both new consoles (and upcoming AMD GPUs) support this, so people can stop pretending it isn't great due to platform preference etc.

Whatever the performance difference may be, RT will play a key part in next-gen engines.

Agree 100%. RT apparently makes developers' lives a lot easier, and anything that can help reduce crunch and other industry nonsense is great. On average this should also lead to more games looking awesome. While it's true that top-tier games using traditional workarounds can already achieve results almost as good as what RT is doing, the time and budget that costs isn't feasible for 99% of games. RT is great and I'm glad it's going to be a thing now.
 

eonden

Member
Oct 25, 2017
17,078
On paper is about all we can do right now. And DF comparing a version of Minecraft one dude did in his spare time over a month vs. an entire team spending many months and many millions is about as apples-and-oranges as it can get too, but nobody's bitching about that.
You are the one that is trying to use stupid comparisons when the guy that actually experienced both of them says that is not the case.

Edit: you do not compare different architectures and solutions "on paper" to dictate which solution is best, because the paper has no real correlation with reality / results. And in these kinds of cases, results are what matter.
 

TheNerdyOne

Member
Oct 28, 2017
521
You are the one that is trying to use stupid comparisons when the guy that actually experienced both of them says that is not the case:

Nvidia themselves said the 2080 Ti is sub-60; until we have a video proving otherwise, that figure stands no matter what Digital Foundry has to say about the matter. Every video we have of Minecraft RTX shows it running sub-60 at 1080p, all of them, and I'd be happy to link a few. And then he's saying it's a fair apples-to-apples comparison to a tech demo made by one man, which is patently absurd.
 

EvilBoris

Prophet of Truth - HDTVtest
Verified
Oct 29, 2017
16,680
if my math is flawed, show me the correct formula, otherwise you're just deliberately trying to start a fight.

Math Fight!
scientist-street-fighter-game-pixel-art-animation-by-diego-sanches-19.gif
 

eonden

Member
Oct 25, 2017
17,078
Nvidia themselves said the 2080 Ti is sub-60; until we have a video proving otherwise, that figure stands no matter what Digital Foundry has to say about the matter. Every video we have of Minecraft RTX shows it running sub-60 at 1080p, all of them, and I'd be happy to link a few. And then he's saying it's a fair apples-to-apples comparison to a tech demo made by one man, which is patently absurd.
I dunno, maybe Digital Foundry, which is the one that has seen both solutions first-hand, is actually the one that can make a good comparison, instead of using a flawed model of looking at specs and thinking that's all that matters, despite Nvidia generally trouncing all AMD solutions at "the same" spec-sheet numbers.
 

Alexandros

Member
Oct 26, 2017
17,800
No, I'm absolutely certain that on paper it can do ~3.6x as many intersections per second. In the real world all bets are off, but my formula was never going to give you real-world performance, just like the formula for flops doesn't tell you squat about real-world performance; it's all just raw peak on-paper figures. The math my formula uses arrives at exactly the XSX's confirmed intersects/second figure (380 billion), and the formula can also accurately arrive at the RT performance figure Nvidia quotes for every single one of its GPUs. People keep saying "you're wrong", or being hostile and trying to stir up shit, but to date none of them have provided a different formula that accurately arrives at the figures from either company, let alone both of them.

Ok then, I will respect the effort you put into it even though I don't believe that real-world performance will be anywhere close to that.
 

TheNerdyOne

Member
Oct 28, 2017
521
I dunno, maybe Digital Foundry, which is the one that has seen both solutions first-hand, is actually the one that can make a good comparison, instead of using a flawed model of looking at specs and thinking that's all that matters, despite Nvidia generally trouncing all AMD solutions at "the same" spec-sheet numbers.

This isn't my first time disagreeing with (and subsequently getting fixed) incorrect information at DF. They had a midrange GPU roundup months ago and, despite the RX 580 winning all but one of their tests, recommended the 1060 instead; I contacted John and said this isn't correct, and it was fixed. So, Digital Foundry are not infallible, sir; they make mistakes like anyone else. Furthermore, on-paper specs are all we have until a multiplatform game is made for the XSX utilizing the hardware features properly and we see how that compares to a 2080 Ti running that software on PC.

We do have confirmation from the lead technical director of The Coalition that Gears 5 runs at over 100fps at 4K, with multiple settings beyond PC Ultra. So at the very least it's 2080 Ti tier for raster in that game. We'll probably see a lot more soon enough.
 

eonden

Member
Oct 25, 2017
17,078
This isn't my first time disagreeing with (and subsequently getting fixed) incorrect information at DF. They had a midrange GPU roundup months ago and, despite the RX 580 winning all but one of their tests, recommended the 1060 instead; I contacted John and said this isn't correct, and it was fixed. So, Digital Foundry are not infallible, sir; they make mistakes like anyone else. Furthermore, on-paper specs are all we have until a multiplatform game is made for the XSX utilizing the hardware features properly and we see how that compares to a 2080 Ti running that software on PC.

We do have confirmation from the lead technical director of The Coalition that Gears 5 runs at over 100fps at 4K, with multiple settings beyond PC Ultra. So at the very least it's 2080 Ti tier for raster in that game. We'll probably see a lot more soon enough.
OK then, when the RT performance is not 4x better than the 2080 Ti, I expect you to come here and say paper performance has no meaning.

Edit: do you have a link to that "over 100 fps at 4K beyond Ultra"? Because, uh, that doesn't sound realistic at all lmao.
 

RingRang

Alt account banned
Banned
Oct 2, 2019
2,442
On the one hand I'm thrilled the next consoles are going to have ray tracing as it's an amazing technology to see in games.

On the other hand, this Minecraft video kind of confirms my biggest fears when it comes to a first-generation implementation on consoles. Here is a game that is as basic as they come graphically, and it's not even able to hit 60fps at 1080p with high-quality ray tracing enabled.

We're finally going to have these incredibly powerful and balanced consoles, but I'm afraid we're going to see developers sacrificing frame rate just so they can show off that sweet ray tracing tech. I really hope any game that does this gives us the option to play at 60fps with ray tracing turned off.
 

eonden

Member
Oct 25, 2017
17,078
On the one hand I'm thrilled the next consoles are going to have ray tracing as it's an amazing technology to see in games.

On the other hand, this Minecraft video kind of confirms my biggest fears when it comes to a first-generation implementation on consoles. Here is a game that is as basic as they come graphically, and it's not even able to hit 60fps at 1080p with high-quality ray tracing enabled.

We're finally going to have these incredibly powerful and balanced consoles, but I'm afraid we're going to see developers sacrificing frame rate just so they can show off that sweet ray tracing tech. I really hope any game that does this gives us the option to play at 60fps with ray tracing turned off.
The ray tracing they chose to use is probably one of the most expensive, and Minecraft (despite its looks) has a ton of interactive elements that would be active with ray tracing if they were enabled. Other games have much more controlled environments, where the surfaces (and objects) on which ray tracing is active are much smaller (and even then you will have performance problems).
 

TheNerdyOne

Member
Oct 28, 2017
521
OK then, when the RT performance is not 4x better than the 2080 Ti, I expect you to come here and say paper performance has no meaning.

Edit: do you have a link to that "over 100 fps at 4K beyond Ultra"? Because, uh, that doesn't sound realistic at all lmao.


news.xbox.com

Xbox Series X: A Closer Look at the Technology Powering the Next Generation - Xbox Wire

A few months ago, we revealed Xbox Series X, our fastest, most powerful console ever, designed for a console generation that has you, the player, at its center. When it is released this holiday season, Xbox Series X will set a new bar for performance, speed and compatibility, all while allowing...


To close out the segment on the power of Xbox Series X, The Coalition's Technical Director, Mike Rayner, came up to show us how his team is planning to optimize Gears 5 for Xbox Series X. The team showcased a technical demo of Gears 5, powered by Unreal Engine, for Xbox Series X using the full PC Ultra Spec settings, which included higher resolution textures and higher resolution volumetric fog, as well as a 50% higher particle count than the PC Ultra Specs allowed. They also showed off the opening cutscene, which now runs at 60 FPS in 4K (it was 30 FPS on Xbox One X), meaning the transition from real-time cutscenes to gameplay is incredibly smooth.

There were also some noticeable improvements in a few other areas as well. Load times were extremely fast, and the team was able to turn on some features that, while previously implemented, had to be turned off for the Xbox One X version. This included contact shadows (providing extra depth to objects) and self-shadow lighting on plants and grass, making every scene feel more realistic. Rayner also shared that the game is already running over 100 FPS and that the team is investigating implementing 120 FPS gameplay for multiplayer modes, giving players an experience never before seen on consoles. Most impressive of all? The fact that the team was able to get all of this up and running in a matter of weeks.


Settings beyond Ultra, cutscenes running at a fixed 4K60 (cutscenes in Gears 5 run at a capped framerate, that's normal), the game running OVER 100fps, and they detail which settings are beyond Ultra, or at least some of them.


And this was done in only a few weeks, so it hasn't even been optimized yet. So, yeah: a 2080 Ti does 60fps at 4K Ultra in the Gears benchmark; it runs around 100fps indoors and around 60-80 outdoors and in intense firefights, and this is doing better than that, with higher settings and no optimization. It bodes very well and makes comparisons to a 2080 a bit absurd. A 2080 only does around 45fps at 4K Ultra in Gears 5, for reference.
 

eonden

Member
Oct 25, 2017
17,078
news.xbox.com

Xbox Series X: A Closer Look at the Technology Powering the Next Generation - Xbox Wire

A few months ago, we revealed Xbox Series X, our fastest, most powerful console ever, designed for a console generation that has you, the player, at its center. When it is released this holiday season, Xbox Series X will set a new bar for performance, speed and compatibility, all while allowing...





Settings beyond Ultra, cutscenes running at a fixed 4K60 (cutscenes in Gears 5 run at a capped framerate, that's normal), the game running OVER 100fps, and they detail which settings are beyond Ultra, or at least some of them.
That just says cutscenes are at 4K, in the same way that the Xbox One X has the cutscenes at 4K (but 30fps). And then we see something like Digital Foundry saying that there was similar performance to an RTX 2080. What we will see is a lower resolution with a pretty good scaler (scalers have improved a ton over the past gen).

Just a heads-up on what kind of jump you are talking about when you say 4K 100fps, because I am not sure you understand it:

unknown.png


4K 100fps is *at best* two graphics-card generations away, and even then it makes no fucking sense right now because HDMI 2.1 only does 4K60fps. They were talking about getting to high refresh rates... for other resolutions.
 

TheNerdyOne

Member
Oct 28, 2017
521
That just says cutscenes are at 4K, in the same way that the Xbox One X has the cutscenes at 4K (but 30fps). And then we see something like Digital Foundry saying that there was similar performance to an RTX 2080. What we will see is a lower resolution with a pretty good scaler (scalers have improved a ton over the past gen).

Just a heads-up on what kind of jump you are talking about when you say 4K 100fps, because I am not sure you understand it:

unknown.png


4K 100fps is *at best* two graphics-card generations away, and even then it makes no fucking sense because HDMI 2.1 only does 4K60fps. They were talking about getting to high refresh rates... for other resolutions.


Uh... HDMI 2.1 allows for 4K120 and beyond, and 8K60; it's sort of the whole selling point of current-gen LG OLEDs doing 4K120 adaptive sync properly.
Furthermore, nowhere on that page does it say they're running at 100fps at lower resolutions; the entire context of that paragraph is 4K performance, and cutscenes are at a fixed framerate even on PC. I'm sure someone could tweet at the guy for clarification, but he doesn't mention a lower resolution anywhere in there; the only resolution mentioned is 4K. And the settings are beyond the Ultra preset, a few of which he outlined in that very paragraph.
 

Rpgmonkey

Member
Oct 25, 2017
1,348
Unless I'm missing something there doesn't seem to actually be much disagreement here.

The RPS article posted in this thread says they were using 2080Ti's and noted some instability in the framerate. Nvidia notes that it's "roughly" 60FPS, whatever that means (it may have been above 60 but wasn't stable and had noticeable <60 drops, who knows). They also say their main target is the 2060, which possibly implies that they're targeting 1080p60 on the 2060. Then if you look at the game-debate article also posted in this thread it aligns with that information, which also says they're targeting 1080p60 on the 2060, with GPUs like the 2080 and 2080Ti being capable of higher resolutions and/or framerates.

So while there's not much hard data on how the build was running on a 2080Ti, it seems reasonable to say that either way Nvidia has reason to believe that it'll be running the game above 1080p60 by the final release. If I had to guess the XSX version is targeting 1080p60, but who knows, maybe it'll be more by release.
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
news.xbox.com

Xbox Series X: A Closer Look at the Technology Powering the Next Generation - Xbox Wire

A few months ago, we revealed Xbox Series X, our fastest, most powerful console ever, designed for a console generation that has you, the player, at its center. When it is released this holiday season, Xbox Series X will set a new bar for performance, speed and compatibility, all while allowing...





Settings beyond Ultra, cutscenes running at a fixed 4K60 (cutscenes in Gears 5 run at a capped framerate, that's normal), the game running OVER 100fps, and they detail which settings are beyond Ultra, or at least some of them.


And this was done in only a few weeks, so it hasn't even been optimized yet. So, yeah: a 2080 Ti does 60fps at 4K Ultra in the Gears benchmark; it runs around 100fps indoors and around 60-80 outdoors and in intense firefights, and this is doing better than that, with higher settings and no optimization. It bodes very well and makes comparisons to a 2080 a bit absurd. A 2080 only does around 45fps at 4K Ultra in Gears 5, for reference.
Holy shit. I missed the 100fps remark yesterday.

This console is insane omg
 

GrrImAFridge

ONE THOUSAND DOLLARYDOOS
Member
Oct 25, 2017
9,665
Western Australia
news.xbox.com

Xbox Series X: A Closer Look at the Technology Powering the Next Generation - Xbox Wire

A few months ago, we revealed Xbox Series X, our fastest, most powerful console ever, designed for a console generation that has you, the player, at its center. When it is released this holiday season, Xbox Series X will set a new bar for performance, speed and compatibility, all while allowing...





Settings beyond Ultra, cutscenes running at a fixed 4K60 (cutscenes in Gears 5 run at a capped framerate, that's normal), the game running OVER 100fps, and they detail which settings are beyond Ultra, or at least some of them.

That second paragraph strikes me as a little weaselly worded given it doesn't specify the game is running at 100+fps with the increased visual fidelity at native 4K. 100+fps at 1080p would be believable, though, and put it in line with what has been said about the rasterisation performance of the XSX GPU being comparable to the standard 2080.

We'll know soon enough, I suppose.
 

TheNerdyOne

Member
Oct 28, 2017
521
That second paragraph strikes me as a little weaselly worded given it doesn't specify the game is running at 100+fps with the increased visual fidelity at native 4K. 100+fps at 1080p would be believable, though, and put it in line with what has been said about the rasterisation performance of the XSX GPU being comparable to the standard 2080.

We'll know soon enough, I suppose.

I suppose, but even if you don't take the 100+ fps part to be about 4K (which I don't know why you wouldn't, since no other resolution was mentioned), if you take the 4K60 at settings beyond Ultra at face value, that's already faster than a 2080, because that only does 45fps on actual Ultra. So a 2080 is a bit underselling the raster performance here IMO, especially considering there's no optimization here; it's a few weeks of work, not a full development cycle on the port (so far).
 

eonden

Member
Oct 25, 2017
17,078
Uh... HDMI 2.1 allows for 4K120 and beyond, and 8K60; it's sort of the whole selling point of current-gen LG OLEDs doing 4K120 adaptive sync properly.
Furthermore, nowhere on that page does it say they're running at 100fps at lower resolutions; the entire context of that paragraph is 4K performance, and cutscenes are at a fixed framerate even on PC. I'm sure someone could tweet at the guy for clarification, but he doesn't mention a lower resolution anywhere in there; the only resolution mentioned is 4K. And the settings are beyond the Ultra preset, a few of which he outlined in that very paragraph.
You are right on HDMI 2.1, I always get it wrong.

There is 0% chance that Gears 5 runs at "beyond Ultra" at 4K 100fps. The gap between that and the previous performance leader is 2x... which is two generations of graphics cards away. Just look at Digital Foundry's video:



Dictator himself has said the normal rasterization is around a 2080... which would make sense at 1080p 100+ fps.

That second paragraph strikes me as a little weaselly worded given it doesn't specify the game is running at 100+fps with the increased visual fidelity at native 4K. 100+fps at 1080p would be believable, though, and put it in line with what has been said about the rasterisation performance of the XSX GPU being comparable to the standard 2080.

We'll know soon enough, I suppose.
 

Muhammad

Member
Mar 6, 2018
187
That second paragraph strikes me as a little weaselly worded given it doesn't specify the game is running at 100+fps with the increased visual fidelity at native 4K. 100+fps at 1080p would be believable, though, and put it in line with what has been said about the rasterisation performance of the XSX GPU being comparable to the standard 2080.
Yup, it doesn't state 4K 100fps, it just states 100fps, which could be any resolution really, maybe 1440p upscaled.
 

eonden

Member
Oct 25, 2017
17,078
I suppose, but even if you don't take the 100+ fps part to be about 4K (which I don't know why you wouldn't, since no other resolution was mentioned), if you take the 4K60 at settings beyond Ultra at face value, that's already faster than a 2080, because that only does 45fps on actual Ultra. So a 2080 is a bit underselling the raster performance here IMO, especially considering there's no optimization here; it's a few weeks of work, not a full development cycle on the port (so far).
A cutscene is very different than real game performance, as many of the more complex calculations are turned off. The Xbox One X runs cutscenes at 4k30fps for instance...
 

Muhammad

Member
Mar 6, 2018
187
no, i'm absolutely certain that on paper it can do ~3.6x as many intersections per second.
Please stop, you just picked a common multiplier and claimed it's the industry-standard 10 BVH depth, which is a blatant lie. ONCE MORE, NVIDIA doesn't use BVH depth to calculate Gigarays. Read their whitepaper, for God's sake.
 

GrrImAFridge

ONE THOUSAND DOLLARYDOOS
Member
Oct 25, 2017
9,665
Western Australia
I suppose, but even if you don't take the 100+ fps part to be about 4K (which I don't know why you wouldn't, since no other resolution was mentioned), if you take the 4K60 at settings beyond Ultra at face value, that's already faster than a 2080, because that only does 45fps on actual Ultra. So a 2080 is a bit underselling the raster performance here IMO, especially considering there's no optimization here; it's a few weeks of work, not a full development cycle on the port (so far).

I don't discount the possibility I'm viewing things through an unfairly cynical lens, but it just seems strange to me that the 100+fps frame rate is mentioned as an "improvement in [another] area" after performance and visual fidelity improvements were already mentioned. It feels like there's missing context.
 

TheNerdyOne

Member
Oct 28, 2017
521
Please stop, you just picked a common multiplier and claimed it's the industry-standard 10 BVH depth, which is a blatant lie. ONCE MORE, NVIDIA doesn't use BVH depth to calculate Gigarays. Read their whitepaper, for God's sake.

Except you're quoting a statement about intersections per second, not gigarays per second. Second, their whitepaper details their made-up RTX-OPS figure, not their gigaray figure or how they arrive at it. So you're wrong on both counts, and you're still being hostile towards me for no reason, sir.


I don't discount the possibility I'm viewing things through an unfairly cynical lens, but it just seems strange to me that the 100+fps frame rate is mentioned as an "improvement in [another] area" after performance and visual fidelity improvements were already mentioned. It feels like there's missing context.
There's definitely missing context, but all of the direct statements we have say it's at the very least running faster than a 2080 does, at higher settings than PC is even capable of running. Maybe someone who actually uses Twitter should tweet at him and see if we can get clarification.
 

Muhammad

Member
Mar 6, 2018
187
Except you're quoting a statement about intersections per second, not gigarays per second. Second, their whitepaper details their made-up RTX-OPS figure, not their gigaray figure or how they arrive at it.
They state their GigaRay figure is based on the combination of FP32 + INT/FP execution + RT cores; there is no mention whatsoever of your claimed BVH depth, which you are still failing to give us a link for in the first place. You can't even prove a depth of 10 is the industry standard.
 

TheNerdyOne

Member
Oct 28, 2017
521
They state their GigaRay figure is based off the combination of FP32 + INT/FP execution + RT cores, there is no mention whatsoever of your claimed BVH depth, which you are still failing to give us a link for in the first place, you can't even prove 10 depth is the industry standard.

lxuJDRZ.png


Funny, I had no trouble finding the bit about BVH depth level and how they handle traversal and intersect testing in the whitepaper.

Also, nowhere in the entire whitepaper are gigarays even mentioned, sir; they detail RTX-OPS, which have nothing to do with anything whatsoever and also aren't how they arrive at a rays/second or intersects/second figure at all. Did YOU read the whitepaper?
 

dgrdsv

Member
Oct 25, 2017
11,846
My formula straight up matches the fucking data both companies advertised
Since I didn't really get a reply last time let me try and guess this "formula" of yours: you're basing this on the fact that there's only one RT core in Turing's SM but four TMUs in Navi's CU? What makes you think that the BVH testing h/w in RDNA2 is tied to each of four TMUs?

JFYI: the BVH intersection testing stage is not the main limiting performance factor of current gaming hybrid raster+RT pipeline implementations. Turing shows gains of about 6-10x over running this on CUDA cores when it is, and this is nowhere near what you get in DXR/RTX games presently. Going with 4x BVH traversal performance on 12 TFLOPS of compute and less than 1 TB/s of bandwidth makes absolutely zero sense for a gaming GPU, let alone a console GPU. This would essentially be useless hardware.
 

TheNerdyOne

Member
Oct 28, 2017
521
Since I didn't really get a reply last time let me try and guess this "formula" of yours: you're basing this on the fact that there's only one RT core in Turing's SM but four TMUs in Navi's CU? What makes you think that the BVH testing h/w in RDNA2 is tied to each of four TMUs?

JFYI: the BVH intersection testing stage is not the main limiting performance factor of current gaming hybrid raster+RT pipeline implementations. Turing shows gains of about 6-10x over running this on CUDA cores when it is, and this is nowhere near what you get in DXR/RTX games presently. Going with 4x BVH traversal performance on 12 TFLOPS of compute and less than 1 TB/s of bandwidth makes absolutely zero sense for a gaming GPU, let alone a console GPU. This would essentially be useless hardware.

Yes, my formula only calculates the theoretical number of intersects per second you can do, and it is an accurate figure. It won't translate at all into real-world RT performance, mind you, but then, I said on paper the XSX is ~4x a 2080 Ti; I didn't say it's guaranteed to be in the real world. On paper it is, in at least one area. And yes, I rounded 3.65x to 4x, sorry. Thus far nobody who has said I'm wrong has even tried to provide a different formula for calculating peak intersection rate, or converting that data into peak rays/second. My formula arrives at known data, and then it also arrives at rays/second data for Nvidia GPUs 100% on the nose; the math works for every single Turing GPU you try it on, and we know it works for the XSX because Microsoft was kind enough to tell us the raw intersect figure of 380 billion.


As for what makes me think each TMU also has a full RT core in it? Because if it doesn't, then you can't even arrive at the 380 billion intersects/second figure Microsoft officially stated. 208 RT cores (and 208 TMUs) at the known 1825MHz clock speed gets you 380 billion/sec. So, we know that part is accurate at least.
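For reference, the formula being claimed in this thread appears to boil down to one intersection test per RT unit per clock, with rays/second derived by dividing by an assumed BVH depth of 10. A minimal sketch of that claimed arithmetic (the 2080 Ti figures of 68 RT cores at a ~1545MHz boost clock are assumptions added here for the comparison; none of this is a confirmed formula from either vendor):

Code:
# Sketch of the formula claimed in this thread - NOT a confirmed vendor spec.
# Claim: peak intersection tests/sec = RT units * clock (one test per unit per clock),
# and rays/sec = intersections/sec divided by an assumed BVH depth of 10.

def claimed_intersections_per_sec(rt_units, clock_hz):
    return rt_units * clock_hz

ASSUMED_BVH_DEPTH = 10  # the "common multiplier" being argued about

xsx = claimed_intersections_per_sec(208, 1.825e9)  # 208 RT units @ 1825MHz
ti = claimed_intersections_per_sec(68, 1.545e9)    # assumed 2080 Ti: 68 RT cores @ ~1545MHz

print(f"XSX:     {xsx / 1e9:.1f} billion intersections/s")  # ~379.6, the quoted 380 billion
print(f"2080 Ti: {ti / 1e9:.1f} billion intersections/s")   # ~105.1
print(f"Claimed ratio: {xsx / ti:.2f}x")                    # ~3.6x, the "~4x on paper" claim
print(f"Implied 2080 Ti gigarays: {ti / ASSUMED_BVH_DEPTH / 1e9:.1f}")  # ~10.5 vs the marketed 10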
 

Tora

The Enlightened Wise Ones
Member
Jun 17, 2018
8,639
i suppose, but even if you dont take the 100+ fps part to be about 4k, which idk why you wouldnt since no other resolution was mentioned, but if you take the 4k60 at setting beyond ultra at face value, that's already faster than a 2080 because that only does 45fps on actual ultra. so a 2080 is a bit underselling the raster performance here IMO, especially considering theres no optimization here, its a few weeks of work, not a full development cycle on the port (so far).
You're getting greedy with what to expect, lol; the fact that it's even matching a 2080 is insane, don't push it to comparing against a $1,200 card.

This also confirms how stupidly overpriced GPUs currently are.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA
A cutscene is very different than real game performance, as many of the more complex calculations are turned off. The Xbox One X runs cutscenes at 4k30fps for instance...
Cutscenes are more demanding in Gears 5 than general gameplay. If, and that's a big if, the Series X is running the game at 4K 60 in cutscenes, then 4K 100fps would be very plausible.

Again, if.

Since the game on X runs at near 4K 60 with medium settings, I don't see why it couldn't hit 4K 100 on ultra (vs insane).

What sort of bump would people expect, just based on specs?
 

TheNerdyOne

Member
Oct 28, 2017
521
Cutscenes are more demanding in Gears 5 than general gameplay. If, and that's a big if, the Series X is running the game at 4K 60 in cutscenes, then 4K 100fps would be very plausible.

Again, if.

Since the game on X runs at near 4K 60 with medium settings, I don't see why it couldn't hit 4K 100 on ultra (vs insane).

What sort of bump would people expect, just based on specs?

It's beyond Ultra; read the blog post: particle count is 50% higher, texture res is higher, they added new effects, and a few other things are higher than you can do on PC, full stop. So that's 4K60 cutscenes at beyond-Ultra settings, where you can't even hit 4K60 in cutscenes with a 2080 at all and a 2080 Ti barely can. Says a lot, IMO.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA
It's beyond Ultra; read the blog post: particle count is 50% higher, texture res is higher, they added new effects, and a few other things are higher than you can do on PC, full stop. So that's 4K60 cutscenes at beyond-Ultra settings, where you can't even hit 4K60 in cutscenes with a 2080 at all and a 2080 Ti barely can. Says a lot, IMO.
A number of the PC settings have a level beyond ultra, called insane. I'd imagine they'd reference that if it was using those settings.

But again, like I said, IF it's doing 4K 60 in cutscenes, 100 fps in gameplay at 4K is completely plausible.
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
That second paragraph strikes me as a little weaselly worded given it doesn't specify the game is running at 100+fps with the increased visual fidelity at native 4K. 100+fps at 1080p would be believable, though, and put it in line with what has been said about the rasterisation performance of the XSX GPU being comparable to the standard 2080.

We'll know soon enough, I suppose.
Considering it's running at a locked 4K60 on the SX (which means 60 is the minimum framerate), I doubt it would run at only 100fps at 1080p, or that they would have any problem at all getting to 1080p120.

Also, the X already runs the game at 4K60fps, dynamic but always relatively close to 4K; it's not that outlandish that a GPU more than twice as strong would push less than double the framerate, considering it's pushing native resolution at all times and higher settings.
 

Muhammad

Member
Mar 6, 2018
187
Funny, I had no trouble finding the bit about BVH depth level and how they handle traversal and intersect testing in the whitepaper.
Where is the depth level of 10 you claim?


Also, nowhere in the entire whitepaper are gigarays even mentioned, sir; they detail RTX-OPS, which have nothing to do with anything whatsoever and also aren't how they arrive at a rays/second or intersects/second figure at all. Did YOU read the whitepaper?
WRONG, search Giga Ray and you will find dozens of references to it. Apparently you just skipped over it like you always do, a sign of someone who doesn't do accurate math.
 

dgrdsv

Member
Oct 25, 2017
11,846
Yes my formula only calculates the theoretical number of intersects per second you can do, and it is an accurate figure.
Not really. There is no sign of each RDNA2 TMU containing a dedicated BVH unit.
Since TMUs are usually designed in quads, logic points to there actually being just one BVH unit per quad TMU unit.
This unit is also likely completely decoupled from the TMUs themselves, and its positioning inside the logical and/or physical TMU unit is down to its reuse of data from the texture cache - it does need a lot of cache or local memory, and that is the most expensive part of RT acceleration h/w.

Then you'd have to take into account the simple fact that we don't really know anything about NV's RT core. There was reverse-engineering research recently which suggested that it is accessed "through the TMUs" on Turing as well, for example. It's also impossible to say how many "intersects per second" this core can do, since that is absolutely dependent on the BLAS, which is built by s/w - so it's down to the programmer's hands, really. Saying "intersects per second" is like saying "shaders per second" - it's a meaningless metric, since shaders can be different, and intersections can be different as well, in both their type and number.

And then you have to account for the fact that right now we know even less about RDNA2's "RT core".

my formula arrives at known data, and then it also arrives at rays/second data for nvidia gpus 100% on the nose
That's because this data is based on the same basic count of RT cores in each of said GPUs. Again, this means nothing really.

It seems it can't be stressed enough that BVH testing is just one part of the RT pipeline, and you need a lot of FP32 compute power for the other parts of that same pipeline. This alone makes any X-factor gain in just BVH testing pointless on its own; you'd need those gains across the whole pipeline for them to make sense, and that means 4x of everything really: RT cores, FP32 TFLOPs, cache sizes, memory bandwidth.
 

TheNerdyOne

Member
Oct 28, 2017
521
Where is the depth level of 10 you claim?



WRONG, search Giga Ray and you will find dozens of references to it. Apparently you just skipped over like you always do, a sign of someone who doesn't do accurate math.

The only time that is mentioned is to say "we can do 10 of these" and "Pascal can do 1.1"; it doesn't reference it at all with regard to how they arrive at it, nor does their RTX-OPS section mention it at all, but good job reaching there, buddy.
 

Muhammad

Member
Mar 6, 2018
187
Thus far nobody who has said i'm wrong has even tried to provide a different formula for calculating peak intersection rate, or converting that data in to peak rays/second.
Because it's unknown at this point. You are the only one claiming to know it!
my formula arrives at known data, and then it also arrives at rays/second data for nvidia gpus 100% on the nose
Because you picked a common multiplier and ran with it.
 

TheNerdyOne

Member
Oct 28, 2017
521
Because it's unknown at this point. You are the only one claiming to know it!

Because you picked a common multiplier and ran with it.

Apparently Microsoft, at the very least, ran with it too, because they quote exactly the same figure my formula produces for intersections per second. But apparently that isn't good enough; no, I have to have Nvidia confidential data and be able to link to said data to back up my claim, even though publicly available data from Microsoft already does that.
 

Muhammad

Member
Mar 6, 2018
187
The only time that is mentioned is to say "we can do 10 of these" and "Pascal can do 1.1"; it doesn't reference it at all with regard to how they arrive at it, nor does their RTX-OPS section mention it at all, but good job reaching there, buddy.
Yes indeed it DOES NOT, which means they don't really use BVH depth as you claimed. You are the only one claiming to know a "formula" that NVIDIA doesn't even mention in the first place.

Sigh:

Here they clearly say their GigaRay figure is based on the number of compute tera ops of ray tracing:

Ray Tracing is about half of the FP32 shading time. In Pascal, ray tracing is emulated in software on CUDA cores, and takes about 10 TFLOPs per Giga Ray, while in Turing this work is performed on the dedicated RT cores, with about 10 Giga Rays of total throughput or 100 tera-ops of compute for ray tracing.

 

TheNerdyOne

Member
Oct 28, 2017
521
Yes indeed it DOES NOT, which means they don't really use BVH depth as you claimed. You are the only one claiming to know a "formula" that NVIDIA doesn't even mention in the first place.

Sigh:

Here they clearly say their GigaRay figure is based on the number of compute tera ops of ray tracing:




Clearly that's not the same thing as their RTX-OPS then, or else 10 gigarays is a straight-up lie: if 10 gigarays = 100 RTX-OPS, then the 2080 Ti is only capable of 7.8 gigarays per Nvidia's own weird formula for RTX-OPS. They're saying 1 gigaray is equivalent to 10 TOPS of compute, not 10 RTX-OPS; their RTX-OPS figure is meaningless and has nothing to do with anything as far as I can tell. Basically, what you found means literally nothing.
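To spell out the arithmetic behind that point (the 78 RTX-OPS figure for the 2080 Ti is Nvidia's marketing number as best I recall, so treat this as an illustration of the argument rather than a derivation):

Code:
# If gigarays were simply RTX-OPS / 10, the 2080 Ti's own marketing numbers
# would be inconsistent - which is the point being made above.
rtx_ops_2080ti = 78        # marketed RTX-OPS for the 2080 Ti (assumed figure)
quoted_gigarays = 10       # marketed gigarays/s for the 2080 Ti
implied_gigarays = rtx_ops_2080ti / 10  # 7.8, not 10 -> the two figures aren't the same metric
print(implied_gigarays, quoted_gigarays)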
 

RivalGT

Member
Dec 13, 2017
6,393
Gears 5 settings must be dynamic, because the game is actually really demanding if you turn that off on PC; even a 2080 Ti will struggle at 4K, and most of the time the frame rate will be below 60fps. I think it's the minimum-framerate option; if set to 60, the game was smooth.
 

Muhammad

Member
Mar 6, 2018
187
Clearly that's not the same thing as their RTX-OPS then, or else 10 gigarays is a straight-up lie: if 10 gigarays = 100 RTX-OPS, then the 2080 Ti is only capable of 7.8 gigarays per Nvidia's own weird formula for RTX-OPS. They're saying 1 gigaray is equivalent to 10 TOPS of compute, not 10 RTX-OPS; their RTX-OPS figure is meaningless and has nothing to do with anything as far as I can tell. Basically, what you found means literally nothing.
Basically you are dancing around the issue of flawed math. NVIDIA doesn't use your formula to calculate anything; they use primary rays in a standard bunny test, with tera-ops of ray tracing as their basis of math. Their 10 Giga Rays figure is just that: marketing. You can't compare that to the figure Microsoft used (intersection tests).

The only valid comparison point is the TF number, because it's universal: Microsoft claimed 13 TF for the Series X, NVIDIA claimed 25 TF for the 2080, hence why NVIDIA has more RT performance than RDNA2, which is showing in the Minecraft demo.

geforce-rtx-gtx-dxr-metro-exodus-rtx-rt-core-dlss-frame-expanded-850px.png
 

Muhammad

Member
Mar 6, 2018
187
Yes my formula only calculates the theoretical number of intersects per second you can do

That's because this data is based on the same basic count of RT cores in each of said GPUs. Again, this means nothing really.

It seems it can't be stressed enough that BVH testing is just one part of the RT pipeline, and you need a lot of FP32 compute power for the other parts of that same pipeline. This alone makes any X-factor gain in just BVH testing pointless on its own; you'd need those gains across the whole pipeline for them to make sense, and that means 4x of everything really: RT cores, FP32 TFLOPs, cache sizes, memory bandwidth.
Exactly, there is also INT32 in the case of Turing.

Not really. There is no sign of each RDNA2 TMU containing a dedicated BVH unit.
Since TMUs are usually designed in quads, logic points to there actually being just one BVH unit per quad TMU unit.
This unit is also likely completely decoupled from the TMUs themselves, and its positioning inside the logical and/or physical TMU unit is down to its reuse of data from the texture cache - it does need a lot of cache or local memory, and that is the most expensive part of RT acceleration h/w.

Then you'd have to take into account the simple fact that we don't really know anything about NV's RT core. There was reverse-engineering research recently which suggested that it is accessed "through the TMUs" on Turing as well, for example. It's also impossible to say how many "intersects per second" this core can do, since that is absolutely dependent on the BLAS, which is built by s/w - so it's down to the programmer's hands, really. Saying "intersects per second" is like saying "shaders per second" - it's a meaningless metric, since shaders can be different, and intersections can be different as well, in both their type and number.

And then you have to account for the fact that right now we know even less about RDNA2's "RT core".
He doesn't account for anything except his recently discovered "formula". He is so fixated on his claimed math that he ignores basic rendering knowledge.
 

TheNerdyOne

Member
Oct 28, 2017
521
Basically you are dancing around the issue of flawed math. NVIDIA doesn't use your formula to calculate anything; they use primary rays in a standard bunny test, with tera-ops of ray tracing as their basis of math. Their 10 Giga Rays figure is just that: marketing. You can't compare that to the figure Microsoft used (intersection tests).

The only valid comparison point is the TF number, because it's universal: Microsoft claimed 13 TF for the Series X, NVIDIA claimed 25 TF for the 2080, hence why NVIDIA has more RT performance than RDNA2, which is showing in the Minecraft demo.

geforce-rtx-gtx-dxr-metro-exodus-rtx-rt-core-dlss-frame-expanded-850px.png

Sorry, where is Nvidia coming up with the 23 TF figure? They show 23 TF for a snippet of a frame in an unknown game with unknown settings, so show me precisely how Nvidia arrived at 23 TF for the 2080's RT. You keep saying that Microsoft's intersect-test figure isn't comparable to Nvidia's gigarays figure, and you're right. But Nvidia's intersect-test figure is fair game, and the XSX can do 3.65x more intersect tests per second than the 2080 Ti. Now that that's cleared up, show me EXACTLY how Nvidia is calculating the TF figure for the 2080's RT cores, or kindly shut up about it. I'm getting real tired of you running off at the mouth claiming I'm wrong without actually providing any tangible hard math to back it up.
 

Muhammad

Member
Mar 6, 2018
187
You keep saying that Microsoft's intersect-test figure isn't comparable to Nvidia's gigarays figure, and you're right.
Glad we got that out of the way.

But Nvidia's intersect-test figure is fair game, and the XSX can do 3.65x more intersect tests per second than the 2080 Ti.
!!!!
Please kindly show me how many NVIDIA intersection tests there are in a Turing GPU.
Now that that's cleared up, show me EXACTLY how Nvidia is calculating the TF figure for the 2080's RT cores, or kindly shut up about it. I'm getting real tired of you running off at the mouth claiming I'm wrong without actually providing any tangible hard math to back it up.
That one is easy: NVIDIA takes the number of RT cores × GPU frequency, then divides the number by a factor to account for the role of RT cores in each frame.

So in the case of the RTX 2080: we have 46 RT cores × 1740MHz frequency, which would give you 80 TFLOPS of RT math; then they divide that number by a factor of 3.47 to arrive at the quoted 23 TF of traditional compute.

geforce-rtx-gtx-dxr-metro-exodus-rtx-rt-core-dlss-frame-expanded-850px.png



Microsoft calculated their TF number and came up with the 13 TF figure; NVIDIA did the same thing before them and came up with the 25 TF figure. The burden is on Microsoft to prove their solution is faster, which they don't claim; ONLY you are claiming that.

And in real-world workloads the difference is obvious: clearly the Series X has less RT performance than a 2080, as shown in Minecraft.
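Spelled out, the derivation being described there looks like this (it is the poster's reconstruction fitted to Nvidia's slide, not a formula Nvidia has published; the 3.47 divisor is the poster's own fitted factor):

Code:
# The poster's claimed reconstruction of the "23 TF" RT figure for the RTX 2080.
# Not an official NVIDIA formula; the 3.47 divisor is a fitted factor.
rt_cores = 46
clock_mhz = 1740
raw_tf = rt_cores * clock_mhz / 1000  # ~80.0 "TF" of RT math, per the post
quoted_tf = raw_tf / 3.47             # ~23.1, matching the 23 TF on NVIDIA's slide
print(raw_tf, quoted_tf)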
 

TheNerdyOne

Member
Oct 28, 2017
521
Glad we got that out of the way.


!!!!
Please kindly show me how many NVIDIA intersection tests there are in a Turing GPU.

That one is easy: NVIDIA takes the number of RT cores × GPU frequency, then divides the number by a factor to account for the role of RT cores in each frame.

So in the case of the RTX 2080: we have 46 RT cores × 1740MHz frequency, which would give you 80 TFLOPS of RT math; then they divide that number by a factor of 3.47 to arrive at the quoted 23 TF of traditional compute.

geforce-rtx-gtx-dxr-metro-exodus-rtx-rt-core-dlss-frame-expanded-850px.png



Microsoft calculated their TF number and came up with the 13 TF figure; NVIDIA did the same thing before them and came up with the 25 TF figure. The burden is on Microsoft to prove their solution is faster, which they don't claim; ONLY you are claiming that.

And In real world workloads, the difference is obvious, clearly series x has less RT performance than a 2080 as shown in MineCraft.


If you do the same math on the XSX you come up with 109 TF, not 13 or 23, so clearly that isn't how Microsoft is doing its math.

208 times 1825 gives you 379.6 TFLOPS by your formula; divide by 3.47 like Nvidia did and, uh, ~109.4 TF? So where's the discrepancy?


Furthermore, Minecraft RTX runs at less than 60fps at 1080p on a 2080 Ti in every video we have of the software running, so until you can provide frame-rate-counted video countering that claim, as well as Nvidia's direct quote that it is below 60fps on a 2080 Ti, you're quite clearly wrong on that one.
Better yet, show me how Minecraft ran on a 2080 Ti with one dude working on it by himself for four weeks. Until you can do that, you can't provide an apples-to-apples comparison using ray-traced Minecraft, because the size of the team, the amount of man-hours, money, and optimization done are light-years apart. And even if they weren't, the 2080 Ti doesn't do over 60fps in it in any video we presently have, and Nvidia themselves are straight up on record saying it's sub-60fps on a 2080 Ti.
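And running the same claimed arithmetic on the Series X numbers, which is the rebuttal being made above (again, just the posters' figures plugged into the same claimed formula, not a vendor number):

Code:
# The same claimed formula applied to the Series X figures from this exchange.
raw_xsx_tf = 208 * 1825 / 1000     # 379.6 "TF" in the same units
quoted_xsx_tf = raw_xsx_tf / 3.47  # ~109.4 - nowhere near Microsoft's stated 13 TF,
                                   # which is the claimed discrepancy
print(raw_xsx_tf, quoted_xsx_tf)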
 