Not saying it might not be a good product - I don't think the 5700 cards were bad either - but the hype before the launch was over the top once again.

Navi 21 should be a good product, the first one from AMD since 2013's Hawaii capable of competing with NV's high-end cards. This time they are actually very quiet, so there's a chance that they won't overpromise and underdeliver. They should also be at both feature and performance parity with NV for the first time since 2014, really.
Holy shit... Series X's GPU out-specced inside the year lol. I had heard Moorslawisdead talk about a 24 TFLOP AMD GPU a couple of months ago - is that BS, or a more powerful version of the one you're talking about being 17 TFLOPs?
To be fair, considering that AMD's old marketing team - which got axed and moved over to Intel - was the source of many leaks for the architectures leading up to that release, I'm not surprised that the hype went out of control. The marketing guys were probably intentionally releasing info to get free marketing and consumer attention.
If AMD does not have an answer to DLSS I don't see myself getting it in the future.
Might be RDNA4. I figure we'd see some kind of tensor accelerators in their server cards before we would see them in their gaming cards, much like how Nvidia did it
Oh please don't start these marketing posts for a technology that Nvidia is using exclusively... they don't need your help (their market cap overtook Intel's) and the market needs competition.
DLSS 2.0 is great but it's not without flaws. The main problem is that it's proprietary; very few games use it or will use it.
Do we know anything about AMD's hardware raytracing implementation?
The Series X on par with a 2060 would be pretty damn disappointing, but the Minecraft demo was extremely early.

We had Microsoft's Series X demo of Minecraft RTX that performed roughly on par with an RTX 2060. I personally expect RT performance to be more in line with first-generation RTX cards, which could leave them quite far behind Nvidia's offerings if we see the big leap in RT performance many are expecting.
With Cyberpunk being the biggest PC release this year and supporting the full suite of RT effects plus DLSS 2.0, that's not going to play well for AMD.
The XSX is far more powerful than a 2060.
12GB GDDR6 is plenty.
I'm referring to the Minecraft demo, which is path-traced, and (very) early performance is on par with a 2060. I would hope they can bump that up with more dev time.
For reference, the XSX Gears 5 quick demo port performed around the same as a PC with a 2950X + 2080 Ti.
Now this doesn't mean the GPU in the XSX is 2080 Ti level, but a better comparison thrown around for the XSX GPU is 2080 Super level.
I wonder if jumping on the bandwagon early with Navi 1X would have done AMD some good. I figure getting an early run allowed Nvidia to make changes for the better.

I find it impressive that we have hardware ray tracing in these consoles at all.
I would agree that simply matching first-generation RTX performance in ray tracing isn't going to be good enough in the PC space, especially given the lack of Tensor cores.
The recent horrendous RX 5xxx black screen crashing event aside, that hasn't really been true at all. A few years ago they started developing big end-of-year updates, asking the community what features they want and (mostly) delivering, and half a year back they launched a redesigned driver control panel that really makes nVidia's look antiquated by comparison.
Both companies have had awful drivers - a few years back you had nVidia drivers apparently actually killing cards. AMD has had a bad reputation for their drivers, but they have been steadily improving them for some time now. Granted, the recent black screen crash issue was awful, absolutely terrible PR, but that mostly seems to have been resolved now, thankfully.
I guess my friends and I are the lucky ones as we've been on mostly AMD cards for about a decade now and we've never really had many issues with them. The most recent - and egregious - would be the issue mentioned above, but of my friends only I had it and 19.12.1 was luckily solid as a rock for me.
It didn't perform on par with a 2060. 1080p at "well above 30fps, but not locked to 60" is basically 2080 performance.
I'm pretty sure there was some patent people were talking about a year ago. Mostly speculation tbh. In the end I think we have to wait for official information to know the actual hardware solution AMD is going for. And for that we probably have to wait at least for the official reveal.

NV has developed basically its own processing units just for DLSS and ray tracing. Do we know how AMD is going about it, using the GPU's main cores for rasterization and ray tracing simultaneously?
Well, you can look at it this way: console devs are going to find ways to better optimize their ray tracing performance on consoles, and those optimizations will find their way up to PC.
It's less about node and more about how it's sounding like Nvidia is boosting clocks quite a bit. 10/8nm is still denser than 16/12nm after all. AMD already gave us expectations for RDNA2; Nvidia hasn't.

People keep reading WAY too much into 7nm vs. 8nm when it comes to power consumption and heat generation. If AMD ends up outperforming NV this cycle it won't be because of a minor difference in process node.
I think it comes from the comments that the Minecraft ray tracing performance was similar to that of a 2060. But that was clearly referring to the ray tracing part, not to the rest of the brute power. So my guess is that it has power similar to a 2080, with ray tracing capabilities similar to a 2060.

I don't know where you read this. Digital Foundry said that when MS showed them the Gears 5 demo, it ran slightly worse on Series X than it does on an RTX 2080 (non-Super). This places the Series X at roughly 2070 Super-level performance, at least for that particular game.
TSMC's N7 and Samsung's 8LPP to be precise. Neither is 7 or 8 in nanometers and both are significantly larger.
TSMC's 12FFN which is essentially 16FF+. So "16nm" really.
Of course. Still, some processes can be better than others when all else is equal.

while AMD is 7nm and Nvidia demolishes AMD in performance and performance per watt. The architecture is very important; only focusing on node size is a bit stupid.
They are preparing for a fight in the high end and are making sure that they will be able to push the chips to their maximum if needed. I dunno why people are so surprised by this.
This is just wrong, I see people spreading this lie around quite frequently... go watch the video you're referencing again. It performed at or around an RTX 2080 (non-Super, definitely NOT at Ti level) in Gears 5.
Density isn't something inherent to a process and is a result of design too. Bigger chips are easier to cool.

TSMC 7nm is definitely better, but the Samsung is presumably cheaper per mm^2. I expect Nvidia chips to be a lot bigger but not necessarily more expensive to make.
Sure, if you're okay with a 500W GPU.

If a PS5 is 10.28TF at 36 CUs, does this mean the RDNA2 will be 22.8TF?
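For context on those numbers: the TFLOP figures quoted in this thread fall straight out of CU count x 64 shaders per CU x 2 ops per clock x clock speed. A quick sketch of that arithmetic (the 80 CU count and the assumption that big Navi holds the PS5's 2.23 GHz clock are purely illustrative, not confirmed specs):

```python
# Rough FP32 throughput for an RDNA GPU: CUs * 64 shaders/CU * 2 ops/clock (FMA) * clock (GHz).
def rdna_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # divide by 1000 to go from GFLOPs to TFLOPs

print(rdna_tflops(36, 2.23))  # PS5: ~10.28 TF
print(rdna_tflops(80, 2.23))  # hypothetical 80 CU Navi 21 at the same clock: ~22.8 TF
```

So 22.8TF only follows if the bigger chip actually sustains the same clocks, which is exactly where the power consumption jab above comes in.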
Well, still, the general idea is that this card generation AMD is not shrinking the node while Nvidia is. And that is when Nvidia has previously managed more efficient cards even while being on a bigger node. All this doom and gloom about Nvidia having a bad generation this year is mainly due to the 20xx generation's "small jump" (which can be explained by the addition of the tensor cores).
Doom and gloom rarely has anything to do with anything; it's just people who don't know shit making assumptions based on an incorrect understanding of h/w.
They've gone all in on D3D12/VK recently, and with RDNA being new and requiring a new driver for older APIs, this means that it will suffer there.
RDNA does fairly well under D3D11 though, surprisingly well sometimes even. The elimination of GCN execution issues did some wonders there.
Yes. The CPU portion of AMD's driver is still considerably behind NV's, and I don't think that they've improved it at all over the last few years; they likely won't even try to, with their focus on 12/VK.

It's CPU overhead that I'm concerned about. Do AMD's drivers still effectively reduce your CPU performance (vs. Nvidia) in CPU-bound scenarios when using DX9-11?
Well, yeah, but have you seen their AAA performance though? They are fine in those games which everyone benchmarks. Never mind that it's about 1 out of 10 releases on PC, and in the other 9 they are sometimes comically behind.

Going all in on DX12/Vulkan means nothing to me when the majority of the PC gaming library isn't running on these APIs.
Yeah, but that isn't even all: if you want to record / stream, AMD GPUs aren't going to be the first choice - do you see this changing in the near future (or ever)?

I am curious how these cards will perform in the already released RT titles, and which RT effects scale on them. It would be neat to see different effect types scaling differently across architectures, like we see with Pascal vs. Turing. Pascal can kinda do RT shadows or high-roughness-cutoff RT reflections - not great, but it can to a degree at 1080p and maybe 30fps targets - but it absolutely cannot do GI or anything where ray direction is more erratic.
Regarding AMD drivers and software complaints - I think their driver UI is pretty nice these days, a bit too nested for my taste regarding how many sub-clicks it requires to get to something, but still, it is fast. Their problem with their drivers for me as a reviewer has to do with overhead in DX9-DX11 titles, where midrange CPUs or GPUs really drag performance down, and also legacy title support. OGL and older DX9 games on AMD tend to have problems in my experience, like The Witcher 2 I tried at the beginning of this year. It ran worse on the RX 5700 than the GTX 1060 due to legacy issues.
Honestly, I think a lot of the reason behind continued bad performance with AMD drivers on DX9 is the fact that they don't have the resources to go back and optimize that stuff, plus AMD doubling down on DX12/Vulkan support. A lot of games with older versions of DirectX tend to run better if you use DXVK or D9VK to force them to use Vulkan.

Yeah, I'm surprised they still haven't been able to do anything about that. I've seen cases where people have done benchmarks of older games through Proton on Linux and got better performance than running through DX9 in Windows. Better performance with a compatibility layer than native.
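As a footnote to the DXVK suggestion above: on Windows, using DXVK for a single game just means dropping its Vulkan-backed D3D9 DLL next to the game's executable so it gets loaded instead of Microsoft's. A minimal sketch - the paths are placeholders, and The Witcher 2 is only used here because it came up above as a 32-bit DX9 title:

```python
# Copy DXVK's 32-bit d3d9.dll into a game folder so the game loads it at launch.
# Paths are assumptions for illustration; point them at your own DXVK release and game install.
import shutil
from pathlib import Path

dxvk_x32 = Path(r"C:\tools\dxvk\x32")         # 32-bit DLLs from a DXVK release archive
game_dir = Path(r"C:\Games\The Witcher 2")    # folder containing the game's .exe

shutil.copy(dxvk_x32 / "d3d9.dll", game_dir)  # delete the copied DLL to go back to native D3D9
```

Proton does effectively the same thing on Linux (it bundles DXVK), which lines up with the benchmark observation above; whether it actually helps still varies per game and driver.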