But they don't ignore RTX. Never have; in fact, just today there's an RTX + DLSS review with Cyberpunk, which means Nvidia cards only. They simply say, as is their right, that RT is still not more important than overall performance.
They either don't include it in their benchmarks at all or include just a couple of games out of the 30 or so released thus far. And while they do benchmark those three games, they constantly say how it isn't worth it in their opinion. That looks like ignoring it to me.
Them ignoring Cyberpunk would be quite a feat for sure. Thankfully they know how to do business and thus it didn't happen.
Nvidia, with their wording, is actually trying to editorialize that opinion. That's the problem here.
So how is it going so far? Have they already changed their opinion now that they no longer get review samples from NV?
And what is the problem exactly? Them not getting reference review samples means that their reviews will come out later than those from outlets that do get them. That's a loss for both them and NV.
I think that both parties are at fault for this situation but if someone wants to get mad at one commercial company cutting ties with another commercial company then it's their right of course.
1. Subjective opinion, not a fact. Just because you do not share it does not mean it's wrong. SAM is indeed some very exciting tech that has remained exclusive to consoles for years.
When you review products you don't really have the luxury of "subjective opinions" unless you show data that backs them up at least somewhat.
SAM hasn't been exclusive to consoles at all btw; it's AMD's branding of Resizable BAR, a standard PCIe feature. Not sure where you got that.
2. You are misconstruing what they said. From the 3080 review: "[...] enabling RTX [on the 3080] did reduce FPS by 41%, which is very reasonable performance at 4K but also a massive FPS drop. For comparison, the RTX 2080Ti saw a 49% drop so not a huge difference there in the margin. The same is also true when using DLSS..."
He did NOT say RTX performance is the same, he said the raw computational COST of RTX is SIMILAR.
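To make the distinction concrete, the claim is about the *relative* FPS drop from enabling RT, not about absolute FPS. A minimal sketch of that arithmetic (the FPS-off figures below are hypothetical, chosen only to reproduce the 41% and 49% drops quoted from the review):

```python
# Relative cost of enabling RT: the fractional FPS drop, independent of
# how fast either card is in absolute terms.

def rt_cost(fps_off: float, fps_on: float) -> float:
    """Fractional FPS drop from enabling ray tracing."""
    return 1 - fps_on / fps_off

# Hypothetical numbers matching the drops quoted in the review:
drop_3080 = rt_cost(fps_off=100, fps_on=59)     # 41% drop, as quoted
drop_2080ti = rt_cost(fps_off=70, fps_on=35.7)  # 49% drop, as quoted

print(f"3080 RT cost: {drop_3080:.0%}")     # 41%
print(f"2080 Ti RT cost: {drop_2080ti:.0%}")  # 49%
```

Two cards can have very different absolute RT performance while showing similar relative costs, which is exactly the "similar computational cost" point.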
I'm not "misconstruing" anything.
https://www.techspot.com/article/2109-nvidia-rtx-3080-ray-tracing-dlss/ said:
Our tests also show that ray tracing acceleration in the RTX 3080 isn't overly better than the 2080 Ti's, with less than a 10% speed up to RT acceleration separating the two. In a best-case scenario like Wolfenstein: Youngblood at 4K which received an Ampere-specific patch to improve performance, the RTX 3080 is 25% better at ray tracing acceleration. Ideally, we'd need to see a 50% or even 100% improvement to RT acceleration before the RTX on and off gap feels more acceptable with today's effects.
Because Ampere is only a minor improvement over Turing for acceleration, there's still question marks over whether this card will be sufficient for the next few years of ray tracing. Most of today's games use only one or two effects, and use them sparingly. If games start going all out on ray tracing effects – like what we see with Fortnite – the gap between RTX on and off will grow substantially. More acceleration with more powerful RT cores will be required to keep up. Of course, this is all speculation about the future, so we'll have to see what happens with the next few years of games.
That's a number of false claims stemming from a lack of understanding of how RT functions on modern GPUs.
And in retrospect, them saying this about Ampere contrasts nicely with them saying that RDNA2 RT will get better once software is optimized for it.
3. What they have said is that Ampere does not see the same jump in performance in lower resolutions as it does in 4K, the latter of which has a higher performance margin compared to Turing. They are not the first to say this, and they have charts proving their point. They even mention that it varies by game.
a) When a card gives better results at higher resolutions, that means _better_ scaling with resolution, not worse.
b) What Ampere shows at lower resolutions depends entirely on the benchmarking suite used. With roughly 1/3 to 1/2 of the suite being heavily AMD-optimized games, all NV h/w hits CPU/system limits far sooner than it should, which leads to worse results at lower resolutions. Lower resolutions are always mostly CPU- and platform-bound, and that's where you should look for the reason, not at "FP32" like they did.
c) Their explanation of why this happens was mostly "it's something in the Ampere architecture" and "double FP32 helps at higher resolutions", while the actual two reasons are their benchmarking suite and Ampere's advantage in raw memory bandwidth, which lets it scale better at higher resolutions.
So that was handled pretty badly by them overall, leading to people on the net saying that Ampere has issues with scaling.
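The CPU-bound argument above can be sketched as a toy model: observed FPS is capped by whichever of the GPU or the CPU/platform is slower, so a faster GPU shows little or no lead at low resolution even though nothing is wrong with its scaling. All figures here are illustrative, not benchmarks:

```python
# Toy model: observed FPS = min(GPU-limited FPS, CPU/platform-limited FPS).

def observed_fps(gpu_fps: float, cpu_fps: float) -> float:
    return min(gpu_fps, cpu_fps)

CPU_CAP = 160  # hypothetical CPU/platform limit, roughly resolution-independent

# Hypothetical GPU-limited throughput at two resolutions:
gpu_limited = {
    "1080p": {"Ampere": 220, "Turing": 170},  # both exceed the CPU cap
    "4K":    {"Ampere": 90,  "Turing": 65},   # GPU-bound, cap irrelevant
}

for res, cards in gpu_limited.items():
    a = observed_fps(cards["Ampere"], CPU_CAP)
    t = observed_fps(cards["Turing"], CPU_CAP)
    print(f"{res}: Ampere leads by {a / t - 1:.0%}")
```

At 1080p both cards sit at the cap and the lead collapses to 0%, while at 4K the full GPU-limited gap shows through; a benchmark suite that hits the cap early will therefore understate the faster card at low resolutions regardless of its architecture.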
I really don't think you fully understand what pressuring means.
I fully understand what pressuring means. And as I've said multiple times already, I fully agree that the way NV handled this was stupid and unacceptable, but mostly because, in contrast to what you suggest, I don't see how this will pressure HWU into anything at all. The result will likely be the opposite of whatever NV wanted to pressure them into, and I fully expect HWU to double down on their stance that RTX/RT/DLSS aren't relevant now. For me that's a loss, as I would very much like them to provide proper benchmarks of RT in all games that have it. Hence my disappointment with all parties here.