
dgrdsv

Member
Oct 25, 2017
11,879
Not saying it might not be a good product, I don't think the 5700 cards were bad either, but the hype before the launch was over the top once again
This time they are actually being very quiet, so there's a chance that they won't overpromise and underdeliver. Also, they should be at both feature and performance parity with NV for the first time since 2014, really.
 

TC McQueen

Member
Oct 27, 2017
2,592
Not saying it might not be a good product, I don't think the 5700 cards were bad either, but the hype before the launch was over the top once again
To be fair, considering that AMD's old marketing team - which got axed and moved over to Intel - was the source of many leaks for the architectures leading up to that release, I'm not surprised that the hype went out of control. The marketing guys were probably intentionally releasing info to get free marketing and consumer attention.
 

asd202

Enlightened
Member
Oct 27, 2017
9,557
If AMD does not have an answer to DLSS I don't see myself getting it in the future.
 

tokkun

Member
Oct 27, 2017
5,406
If AMD does not have an answer to DLSS I don't see myself getting it in the future.

I'm sure it is (and will continue to be) an active research area, considering that consoles are going to be using RDNA2 for many years.

I can see two different paths here:

A. People figure out how to improve FXAA similarly without relying on a neural network for reconstruction, and instead use algorithms that can be accelerated on the compute units they have available.
B. It could turn out that we can get most of the benefit using a simpler neural network that could feasibly run without the need for tensor cores.
 

gozu

Member
Oct 27, 2017
10,331
America
Oh please don't start these marketing posts for a technology that Nvidia is using exclusively... they don't need your help (their market cap overtook Intel's) and the market needs competition.

DLSS 2.0 is great, but it's not without flaws. The main problem is that it's proprietary; very few games use it or ever will.

Reminds me of Voodoo GPUs and the Glide API, which was proprietary to 3dfx and provided a clear boost over Direct3D. Banshee and Voodoo 2 cards were kings and Nvidia was a contender. Eventually 3dfx got a big head and stopped selling chips to partners (Monster, Guillemot, etc.) after acquiring STB. Nvidia gobbled them up shortly after.
 

Jroc

Banned
Jun 9, 2018
6,145
For the past 15 years I've hopped back and forth between AMD/ATI and Nvidia.

X800 GTO
8800GT
4890 Crossfire
GTX670
RX480

I plan on getting something from the Nvidia 3000 series, but if the prices are fucked in Canada and AMD lets me go up a performance tier then maybe I'll break the cycle. I hope an open standard equivalent to DLSS takes off. It sucks when something important is vendor-locked.

Very happy that Nvidia eventually caved when it came to Freesync.
 

brain_stew

Member
Oct 30, 2017
4,731
Do we know anything about AMD's hardware raytracing implementation?

We had Microsoft's Series X demo of Minecraft RTX that performed roughly on par with an RTX 2060. I personally expect RT performance to be more in line with first-generation RTX cards, which could leave them quite far behind Nvidia's offerings if we see the big leap in RT performance many are expecting.

With Cyberpunk being the biggest PC release this year and supporting the full suite of RT effects plus DLSS 2.0, that's not going to play well for AMD.
 
OP

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
We had Microsoft's Series X demo of Minecraft RTX that performed roughly on par with an RTX 2060. I personally expect RT performance to be more in line with first-generation RTX cards, which could leave them quite far behind Nvidia's offerings if we see the big leap in RT performance many are expecting.

With Cyberpunk being the biggest PC release this year and supporting the full suite of RT effects plus DLSS 2.0, that's not going to play well for AMD.
the Series X on par with a 2060 would be pretty damn disappointing, but the Minecraft demo was extremely early
 

brain_stew

Member
Oct 30, 2017
4,731
the Series X on par with a 2060 would be pretty damn disappointing, but the Minecraft demo was extremely early

I find it impressive that we have hardware ray tracing in these consoles at all.

I would agree that simply matching first-generation RTX performance in ray tracing isn't going to be good enough in the PC space, especially given the lack of Tensor cores.
 

Smashed_Hulk

Member
Jun 16, 2018
401
the Series X on par with a 2060 would be pretty damn disappointing, but the Minecraft demo was extremely early
The XSX is far more powerful than a 2060.

For reference, the XSX Gears 5 quick demo port performed at around the same level as a PC with a 2950X + 2080 Ti.
Now this doesn't mean the GPU in the XSX is 2080 Ti level, but a better comparison thrown around for the XSX GPU is 2080 Super level.
 
OP

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
The XSX is far more powerful than a 2060.

For reference, the XSX Gears 5 quick demo port performed at around the same level as a PC with a 2950X + 2080 Ti.
Now this doesn't mean the GPU in the XSX is 2080 Ti level, but a better comparison thrown around for the XSX GPU is 2080 Super level.
I'm referring to the Minecraft demo, which is path traced, and (very) early performance is on par with a 2060. I would hope they can bump that up with more dev time.

I find it impressive that we have hardware ray tracing in these consoles at all.

I would agree that simply matching first-generation RTX performance in ray tracing isn't going to be good enough in the PC space, especially given the lack of Tensor cores.
I wonder if jumping on the bandwagon early with Navi 1X would have done AMD some good. I figure getting an early run allowed Nvidia to make changes for the better
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,550
The recent horrendous RX 5xxx black screen crashing event aside, that hasn't really been true at all. A few years ago they started doing big end-of-year updates, asking the community what features they want and (mostly) delivering, and half a year back they launched a redesigned driver control panel that really makes Nvidia's look antiquated by comparison.

Both companies have had awful drivers - a few years back you had Nvidia drivers apparently actually killing cards. AMD has had a bad reputation for their drivers, but they have been steadily improving them for some time now. Granted, the recent black screen crash issue was awful, absolutely terrible PR, but that mostly seems to have been resolved now, thankfully.

I guess my friends and I are the lucky ones as we've been on mostly AMD cards for about a decade now and we've never really had many issues with them. The most recent - and egregious - would be the issue mentioned above, but of my friends only I had it and 19.12.1 was luckily solid as a rock for me.

My old Radeon VII says otherwise. There are people still having driver issues with those cards too. Have they gotten better? Sure, but they still have plenty of issues compared to Nvidia's drivers. On top of that, Nvidia's software is way beyond AMD's.
 

Tovarisc

Member
Oct 25, 2017
24,427
FIN
Nvidia on 8nm might have something to say about that

People keep reading WAY too much into 7nm vs. 8nm when it comes to power consumption and heat generation.

If AMD ends up outperforming NV this cycle, it won't be because of a minor difference in node process.

I wonder if jumping on the bandwagon early with Navi 1X would have done AMD some good. I figure getting an early run allowed Nvidia to make changes for the better

NV has basically developed its own dedicated processing units just for DLSS and ray tracing. Do we know how AMD is going about it - using the GPU's main cores for rasterization and ray tracing simultaneously?
 

eonden

Member
Oct 25, 2017
17,084
About AMD being 7nm while Nvidia is "only" 8nm. Nvidia is currently 10nm while AMD is 7nm and Nvidia demolishes AMD in performance and performance per watt. The architecture is very important; only focusing on node size is a bit stupid.
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
We had Microsoft's Series X demo of Minecraft RTX that performed roughly on par with an RTX 2060. I personally expect RT performance to be more in line with first-generation RTX cards, which could leave them quite far behind Nvidia's offerings if we see the big leap in RT performance many are expecting.

With Cyberpunk being the biggest PC release this year and supporting the full suite of RT effects plus DLSS 2.0, that's not going to play well for AMD.
It didn't perform on par with a 2060. 1080p at "well above 30fps, but not locked to 60" is basically 2080-level performance.

You are probably mixing it up with the reconstructed DLSS resolution.
 

Clessidor

Member
Oct 30, 2017
260
NV has basically developed its own dedicated processing units just for DLSS and ray tracing. Do we know how AMD is going about it - using the GPU's main cores for rasterization and ray tracing simultaneously?
I'm pretty sure there was some patent people were talking about a year ago. Mostly speculation tbh. In the end I think we have to wait for official information to know the actual hardware solution AMD is going for, and for that we probably have to wait at least until the official reveal.
 

Twenty7kvn

The Fallen
Oct 25, 2017
1,749
the Series X on par with a 2060 would be pretty damn disappointing, but the Minecraft demo was extremely early
I find it impressive that we have hardware ray tracing in these consoles at all.

I would agree that simply matching first-generation RTX performance in ray tracing isn't going to be good enough in the PC space, especially given the lack of Tensor cores.
Well, you can look at it this way: console devs are going to find ways to better optimize their ray tracing performance on consoles, and those optimizations will find their way up to PC.
 

Tovarisc

Member
Oct 25, 2017
24,427
FIN
I'm pretty sure there was some patent people were talking about a year ago. Mostly speculation tbh. In the end I think we have to wait for official information to know the actual hardware solution AMD is going for, and for that we probably have to wait at least until the official reveal.

That is maybe the thing I'm most interested in this GPU cycle: what is AMD's RT solution, and how does it compare to NV's?
 
OP

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
People keep reading WAY too much into 7nm vs. 8nm when it comes to power consumption and heat generation.

If AMD ends up outperforming NV this cycle, it won't be because of a minor difference in node process.
It's less about the node and more about how it sounds like Nvidia is boosting clocks quite a bit. 10/8nm is still denser than 16/12nm, after all. AMD already gave us expectations for RDNA2; Nvidia hasn't.
 

Deleted member 16908

Oct 27, 2017
9,377
For reference, the XSX Gears 5 quick demo port performed at around the same level as a PC with a 2950X + 2080 Ti.
Now this doesn't mean the GPU in the XSX is 2080 Ti level, but a better comparison thrown around for the XSX GPU is 2080 Super level.

I don't know where you read this. Digital Foundry said that when MS showed them the Gears 5 demo, it ran slightly worse on Series X than it does on an RTX 2080 (non-Super). This places the Series X at roughly 2070 Super-level performance, at least for that particular game.
 

eonden

Member
Oct 25, 2017
17,084
I don't know where you read this. Digital Foundry said that when MS showed them the Gears 5 demo, it ran slightly worse on Series X than it does on an RTX 2080 (non-Super). This places the Series X at roughly 2070 Super-level performance, at least for that particular game.
I think it comes from the comments that the Minecraft ray tracing performance was similar to that of a 2060. But that was clearly referring to the ray tracing part, not to the rest of the raw power. So my guess is that it has power similar to a 2080, with ray tracing capabilities similar to a 2060.
 

Tovarisc

Member
Oct 25, 2017
24,427
FIN
It's less about the node and more about how it sounds like Nvidia is boosting clocks quite a bit. 10/8nm is still denser than 16/12nm, after all. AMD already gave us expectations for RDNA2; Nvidia hasn't.

I wouldn't listen to that Moore's Law guy too much; he seems to be a bit all over the place. His stuff about DLSS 3.0 was bollocks that he had to backtrack on. Wouldn't be surprised if a lot of his stuff is unreliable. Maybe I have missed some coverage from e.g. Igor, but isn't Moore alone in talking about these huge clock boosts that NV is allegedly going for?
 

low-G

Member
Oct 25, 2017
8,144
People keep reading WAY too much into 7nm vs. 8nm when it comes to power consumption and heat generation.

If AMD ends up outperforming NV this cycle, it won't be because of a minor difference in node process.

Normally I'd disagree. I mean, if AMD manages to outperform Nvidia by 10% while their power consumption and die sizes are the same, then it is the node (that made the difference)...
 

dgrdsv

Member
Oct 25, 2017
11,879
About AMD being 7nm while Nvidia is "only" 8nm.
TSMC's N7 and Samsung's 8LPP, to be precise. Neither is actually 7 or 8 nanometers; both are significantly larger.

Nvidia is currently 10nm
TSMC's 12FFN which is essentially 16FF+. So "16nm" really.

while AMD is 7nm and Nvidia demolishes AMD in performance and performance per watt. The architecture is very important; only focusing on node size is a bit stupid.
Of course. Still some processes can be better than others when all else is equal.

It's less about the node and more about how it sounds like Nvidia is boosting clocks quite a bit.
They are preparing for a fight in the high end and are making sure that they will be able to push the chips to their maximum if needed. I dunno why people are so surprised by this.
 
Nov 8, 2017
13,109
People keep reading WAY too much into 7nm vs. 8nm when it comes to power consumption and heat generation.

If AMD ends up outperforming NV this cycle, it won't be because of a minor difference in node process.

TSMC 7nm is definitely better, but the Samsung node is presumably cheaper per mm^2. I expect Nvidia's chips to be a lot bigger but not necessarily more expensive to make.
 

Kieli

Self-requested ban
Banned
Oct 28, 2017
3,736
If the PS5 is 10.28 TF at 36 CUs, does this mean the big RDNA2 card will be 22.8 TF?
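(For context, that 22.8 TF figure is just the PS5's numbers scaled up to the rumoured 80 CU part at the same clock - both the CU count and the clock are assumptions rather than confirmed specs:

TFLOPS = CUs x 64 shaders per CU x 2 FLOPs per clock x clock in GHz / 1000
PS5: 36 x 64 x 2 x 2.23 / 1000 = ~10.28 TF
80 CUs at 2.23 GHz: 80 x 64 x 2 x 2.23 / 1000 = ~22.8 TF

Whether a desktop card actually sustains PS5-level clocks across 80 CUs is anyone's guess.)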
 

scabobbs

Member
Oct 28, 2017
2,103
The XSX is far more powerful than a 2060.

For reference, the XSX Gears 5 quick demo port performed at around the same level as a PC with a 2950X + 2080 Ti.
Now this doesn't mean the GPU in the XSX is 2080 Ti level, but a better comparison thrown around for the XSX GPU is 2080 Super level.
This is just wrong; I see people spreading this lie around quite frequently... go watch the video you're referencing again. It performed at or around an RTX 2080 (non-Super, definitely NOT at Ti level) in Gears 5.
 

eonden

Member
Oct 25, 2017
17,084
TSMC's N7 and Samsung's 8LPP, to be precise. Neither is actually 7 or 8 nanometers; both are significantly larger.


TSMC's 12FFN which is essentially 16FF+. So "16nm" really.


Of course. Still some processes can be better than others when all else is equal.


They are preparing for a fight in the high end and are making sure that they will be able to push the chips to their maximum if needed. I dunno why people are so surprised by this.
Well, still, the general idea is that this card generation AMD is not shrinking its node while Nvidia is. And that is when Nvidia was already more efficient before, even while being on a bigger node. All this doom and gloom about Nvidia having a bad generation this year is mainly due to the 20xx generation's "small jump" (which can be explained by the addition of the tensor cores).
 

dgrdsv

Member
Oct 25, 2017
11,879
Well, still, the general idea is that this card generation AMD is not shrinking its node while Nvidia is. And that is when Nvidia was already more efficient before, even while being on a bigger node. All this doom and gloom about Nvidia having a bad generation this year is mainly due to the 20xx generation's "small jump" (which can be explained by the addition of the tensor cores).
Doom and gloom rarely has anything to do with anything; it's just people who don't know shit making assumptions based on an incorrect understanding of the h/w.
The TGP figures which were attributed to the upcoming Ampere cards are fine - not that much higher, if at all, than those of Turing.
And considering that there will likely be a top-end competitor this time, it's not at all surprising to see NV trying to push the cards to their maximum - nobody knows if they'll have to, but it's better to prepare.
I'm sure that AMD is doing the exact same thing with Navi 21, and if they actually manage to close the perf/watt gap then it is very likely that AMD's top-end cards will be just as hot as NV's this time.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
I am curious how these cards will perform in the already released RT titles, and which RT effects scale on them. It would be neat to see different effect types scaling differently across architectures, like we see with Pascal vs. Turing. Pascal can kinda do RT shadows or high-roughness-cutoff RT reflections - not great, but it can to a degree at 1080p and maybe 30fps targets - but it absolutely cannot do GI or anything where ray direction is more erratic.

Regarding AMD drivers and software complaints - I think their driver UI is pretty nice these days, a bit too nested for my taste in terms of how many sub-clicks it takes to get to something, but still, it is fast. The problem with their drivers for me as a reviewer has to do with overhead in DX9-DX11 titles, where midrange CPUs or GPUs really drag performance down, and also legacy title support. OGL and older DX9 games on AMD tend to have problems in my experience, like The Witcher 2, which I tried at the beginning of this year. It ran worse on the RX 5700 than the GTX 1060 due to legacy issues.
 

dgrdsv

Member
Oct 25, 2017
11,879
The problem with their drivers for me as a reviewer has to do with overhead in DX9-DX11 titles, where midrange CPUs or GPUs really drag performance down, and also legacy title support. OGL and older DX9 games on AMD tend to have problems in my experience.
They've gone all in on D3D12/VK recently, and with RDNA being new and requiring a new driver for the older APIs, this means it will suffer there.
RDNA does fairly well under D3D11 though, surprisingly well sometimes even. The elimination of GCN execution issues did some wonders there.
 

brain_stew

Member
Oct 30, 2017
4,731
The problem with their drivers for me as a reviewer has to do with overhead in DX9-DX11 titles, where midrange CPUs or GPUs really drag performance down, and also legacy title support.

This has always put me off buying an AMD card. While I'm sure it's less of an issue now that I'm running a 3700X with 3800MHz memory, I just struggle with the idea of a GPU upgrade effectively downgrading my CPU in a huge portion of my games library. I get that Vulkan/DX12 support is much better in modern games, but it's not universal and a lot of what I play isn't modern games. I can't shake the feeling that I'm being short-changed if upgrading one component causes another component to perform worse, and it's not something that is usually picked up on in your average GPU review.

My frame rate preferences are similar to John's at Digital Foundry, in that I value frametime consistency over everything else and a locked 60fps is my number one priority. I'm not particularly bothered about frame rates above 60fps, but I will notice any frametime that misses the 16.67ms target. Even in legacy titles, that is a much harder goal to hit than many realise.

They've gone all in on D3D12/VK recently, and with RDNA being new and requiring a new driver for the older APIs, this means it will suffer there.
RDNA does fairly well under D3D11 though, surprisingly well sometimes even. The elimination of GCN execution issues did some wonders there.

It's CPU overhead that I'm concerned about. Do AMD's drivers still effectively reduce your CPU performance (vs. Nvidia) in CPU-bound scenarios when using DX9-11? I'd love to see some updated benchmarks on this if they've finally made some progress.

Going all in on DX12/Vulkan means nothing to me when the majority of the PC gaming library isn't running on these APIs. It just comes off as a way to handwave away the terrible CPU overhead that has been present in their DX9-11 drivers for what must be at least a decade now.
 

BeI

Member
Dec 9, 2017
5,980
The problem with their drivers for me as a reviewer has to do with overhead in DX9-DX11 titles, where midrange CPUs or GPUs really drag performance down, and also legacy title support. OGL and older DX9 games on AMD tend to have problems in my experience, like The Witcher 2, which I tried at the beginning of this year. It ran worse on the RX 5700 than the GTX 1060 due to legacy issues.

Yeah I'm surprised they still haven't been able to do anything about that. I've seen cases where people have done benchmarks of older games through Proton on Linux and got better performance than running through DX9 in Windows. Better performance with a compatibility layer than native.
 

dgrdsv

Member
Oct 25, 2017
11,879
It's CPU overhead that I'm concerned about. Do AMD's drivers still effectively reduce your CPU performance (vs. Nvidia) in CPU-bound scenarios when using DX9-11?
Yes. The CPU portion of AMD's driver is still considerably behind NV's, and I don't think they've improved it at all over the last few years; they likely won't even try to, given their focus on 12/VK.

Going all in on DX12/Vulkan means nothing to me when the majority of the PC gaming library isn't running on these APIs.
Well, yeah, but have you seen their AAA performance though? They are fine in the games that everyone benchmarks. Never mind that those are about 1 out of 10 releases on PC, and in the other 9 they are sometimes comically behind.
 

brain_stew

Member
Oct 30, 2017
4,731
Yes. The CPU portion of AMD's driver is still considerably behind NV's, and I don't think they've improved it at all over the last few years; they likely won't even try to, given their focus on 12/VK.

Well, yeah, but have you seen their AAA performance though? They are fine in the games that everyone benchmarks. Never mind that those are about 1 out of 10 releases on PC, and in the other 9 they are sometimes comically behind.

And this is the rub: if AMD aren't willing to invest in improving performance in games that don't appear in your standard GPU review suite, then why would I consider them for my next GPU?

If I'm spending £400-£500 on a new GPU, then I don't want performance in older CPU bound titles to regress.

DirectX 11 is over a decade old and in that whole time they've never invested the development time to remove the CPU overhead that their drivers introduce. How am I supposed to have confidence in their drivers when they've shown that level of incompetence for a decade straight?
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
I'm referring to the Minecraft demo, which is path traced, and (very) early performance is on par with a 2060. I would hope they can bump that up with more dev time.


I wonder if jumping on the bandwagon early with Navi 1X would have done AMD some good. I figure getting an early run allowed Nvidia to make changes for the better

Not only optimization - the final devkit was only released in June, and the demo was in March 2020.
 

renx

Member
Jan 3, 2020
330
I've never seen a console with higher clocks than the same tech on a discrete PC version.
The question is, are we expecting 2500MHz GPUs here, or is it just the PS5 doing some crazy magic?
If AMD brings those kinds of clocks to the table, increases the number of cores (which is confirmed), and improves IPC (confirmed according to AMD), then we truly have a monster GPU.
 

Deleted member 11517

User requested account closure
Banned
Oct 27, 2017
4,260
I am curious how these cards will perform in the already released RT titles, and which RT effects scale on them. It would be neat to see different effect types scaling differently across architectures, like we see with Pascal vs. Turing. Pascal can kinda do RT shadows or high-roughness-cutoff RT reflections - not great, but it can to a degree at 1080p and maybe 30fps targets - but it absolutely cannot do GI or anything where ray direction is more erratic.

Regarding AMD drivers and software complaints - I think their driver UI is pretty nice these days, a bit too nested for my taste in terms of how many sub-clicks it takes to get to something, but still, it is fast. The problem with their drivers for me as a reviewer has to do with overhead in DX9-DX11 titles, where midrange CPUs or GPUs really drag performance down, and also legacy title support. OGL and older DX9 games on AMD tend to have problems in my experience, like The Witcher 2, which I tried at the beginning of this year. It ran worse on the RX 5700 than the GTX 1060 due to legacy issues.
Yeah, but that isn't even all: if you want to record/stream, AMD GPUs aren't going to be the first choice - do you see this changing in the near future (or ever)?
 

TC McQueen

Member
Oct 27, 2017
2,592
Yeah I'm surprised they still haven't been able to do anything about that. I've seen cases where people have done benchmarks of older games through Proton on Linux and got better performance than running through DX9 in Windows. Better performance with a compatibility layer than native.
Honestly, I think a lot of the reason behind continued bad performance with AMD drivers on DX9 is the fact that they don't have the resources to go back and optimize that stuff, plus AMD doubling down on DX12/Vulkan support. A lot of games with older versions of DirectX tend to run better if you use DXVK or D9VK to force them to use Vulkan.
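(For anyone wanting to try that on Windows: native Windows use of DXVK isn't officially supported, but the usual unofficial approach is to drop the matching DLL from a DXVK release next to the game's executable - the version number and game path below are purely illustrative:

copy dxvk-1.7\x32\d3d9.dll "C:\Games\SomeOldDX9Game\"

Delete the DLL again to fall back to the native D3D9 path. On Linux, Proton already bundles DXVK, which is where those better-than-Windows benchmark numbers come from.)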
 

jett

Community Resettler
Member
Oct 25, 2017
44,657
People complaining about AMD drivers and claiming Nvidia's are so much better?

what_year_is_it.jpg