
Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
Checkerboard and temporal injection, when done right, look reasonably close for a fraction of the perf/material cost. Yes, I've seen that DF video on Death Stranding, but most of the time the difference only becomes obvious when you freeze-frame and zoom in, which is something people... don't do when gaming.
Did you watch the part of the video about how it is most obvious in motion due to temporal aliasing?
 
Oct 25, 2017
41,368
Miami, FL
This sounds really good.

Just as others have said, I need to see the constellation of software and features that support this hardware before I can feel one way or another about it. Having experienced DLSS 2.0 on my 2080 Ti with Control and Death Stranding, I simply cannot take backward steps. If AMD has a proper answer to DLSS somehow, I'll give it strong consideration. But time is of the essence; the sooner they "leak" some details or even do a full-blown conference and announcement, the better. As in, before the 3000 series cards are up for purchase.
 

Magio

Member
Apr 14, 2020
647
Did you watch the part of the video about how it is most obvious in motion due to temporal aliasing?

Yes. And I could barely tell the difference, as in I could tell the DLSS solution was definitely superior but not in a way where I'd ever complain about how it looked with checkerboarding in real time while playing.
 

Shocchiz

Member
Nov 7, 2017
577
Yes. And I could barely tell the difference, as in I could tell the DLSS solution was definitely superior but not in a way where I'd ever complain about how it looked with checkerboarding in real time while playing.
I think you are very wrong; unfortunately there's no "to me" space when some tech is objectively so superior.
I also don't get the "it's better not to have DLSS" part; DLSS would immensely benefit any hardware, especially weak hardware.
Not having it (or something comparable) is a huge, huge, huge loss for both Big Navi and the next-gen consoles (especially the next-gen consoles).
 

Corralx

Member
Aug 23, 2018
1,176
London, UK
I feel like AMD has always been bad when it comes to anything other than competent hardware and price. Software and driver support has always been okay at best. Sometimes their drivers have been so bad that people just cannot recommend their cards.

AMD needs to completely rethink their card strategy here. It hasn't been about just hardware for a while.

Have you used an AMD card recently?
This was absolutely true in the DirectX 11 era, but today AMD drivers are very solid and perform amazingly in DirectX 12/Vulkan workloads.
It has been at least a couple of years since AMD turned their driver situation around.
In fact, I've had more issues in the past year with Nvidia GPUs than with AMD.
 

Armaros

Member
Oct 25, 2017
4,901
Have you used an AMD card recently?
This was absolutely true in the DirectX 11 era, but today AMD drivers are very solid and perform amazingly in DirectX 12/Vulkan workloads.
It has been at least a couple of years since AMD turned their driver situation around.
In fact, I've had more issues in the past year with Nvidia GPUs than with AMD.

The people that had to deal with massive driver issues for the 5000 series beg to differ. The AMD subreddit and everything online were flooded with driver issues about that series for months.
 

Shocchiz

Member
Nov 7, 2017
577
Have you used an AMD card recently?
This was absolutely true in the DirectX 11 era, but today AMD drivers are very solid and perform amazingly in DirectX 12/Vulkan workloads.
It has been at least a couple of years since AMD turned their driver situation around.
In fact, I've had more issues in the past year with Nvidia GPUs than with AMD.
I have a Vega, and had a 2080 Ti too, and I can confirm the Vega drivers are completely fine.
I was told RDNA1 was a different story for months, but those problems should have been fixed by now.
 

Magio

Member
Apr 14, 2020
647
I think you are very wrong; unfortunately there's no "to me" space when some tech is objectively so superior.
I also don't get the "it's better not to have DLSS" part; DLSS would immensely benefit any hardware, especially weak hardware.
Not having it (or something comparable) is a huge, huge, huge loss.

I'm sorry, I'll try to have a problem with how checkerboarded games look to me?

As for why I'm glad there's no DLSS on consoles, I made myself clear: I'm perfectly fine with checkerboard/temporal injection, and those methods don't need costly dedicated hardware, therefore I'm glad neither Sony nor MS dedicated hardware to a DLSS-like solution in the consoles. In 2020 with a console hardware budget, it would have seemed like a waste to me. (Not even taking into account the INSANE sunk cost Nvidia has in AI in general.)
 

JahIthBer

Member
Jan 27, 2018
10,382
Have you used an AMD card recently?
This was absolutely true in the DirectX 11 era, but today AMD drivers are very solid and perform amazingly in DirectX 12/Vulkan workloads.
It has been at least a couple of years since AMD turned their driver situation around.
In fact, I've had more issues in the past year with Nvidia GPUs than with AMD.
From what I remember, DX12/Vulkan make drivers far less important, so I wouldn't be surprised if it's AMD "improving" by proxy. They really do need to fix their DX11 stuff; they seem to only want to focus on consoles and Ryzen at the moment, then act surprised when Nvidia gets 80% GPU market share.
 

gothmog

Member
Oct 28, 2017
2,434
NY
Have you used an AMD card recently?
This was absolutely true in the DirectX 11 era, but today AMD drivers are very solid and perform amazingly in DirectX 12/Vulkan workloads.
It has been at least a couple of years since AMD turned their driver situation around.
In fact, I've had more issues in the past year with Nvidia GPUs than with AMD.
In the past 2 or so years? No. Couldn't get my last card (580) stable on Windows 10 so I returned it.
 

Nooblet

Member
Oct 25, 2017
13,632
Did you watch the part of the video about how it is most obvious in motion due to temporal aliasing?
One thing I'd say, though (and it's more about how games are in general these days), is that in motion, as in when you are moving the camera, games look really blurry due to all the post-processing, especially at 30 FPS. I often have a hard time noticing sub-native resolution while playing, even when I notice it clearly during cutscenes or while slowly observing the world.
 

Corralx

Member
Aug 23, 2018
1,176
London, UK
The people that had to deal with massive driver issues for the 5000 series beg to differ. The AMD subreddit and everything online were flooded with driver issues about that series for months.

I've had a 5700XT since launch and have played tens of AAA games at release over the past year without a single issue.
The only issue I encountered was broken lighting on the 3D character model in the inventory menu of The Outer Worlds, and that was quickly fixed.

I'm not saying issues aren't there; drivers are complex beasts and have plenty of bugs each release. But in my experience, Nvidia driver quality has degraded recently (there are plenty of comments and posts about that as well), while AMD driver quality has improved a lot compared to the GCN era.
In my experience they are very comparable, if not even a bit more stable on the AMD side.

From what I remember, DX12/Vulkan make drivers far less important, so I wouldn't be surprised if it's AMD "improving" by proxy. They really do need to fix their DX11 stuff; they seem to only want to focus on consoles and Ryzen at the moment, then act surprised when Nvidia gets 80% GPU market share.

It's not really that simple.
And if that were the case, it wouldn't explain why it would have helped AMD's driver quality while actually degrading Nvidia's.
 

wachie

Banned
Oct 25, 2017
526
It's a tangible benefit, but Turing has been out for 2 years and a lousy 14 games support it. I'm looking at charts and they show an additional 25 games with support scheduled for the future.

The hyperbole that some are using (not you) is a bit much. For this to be a truly killer feature, it would benefit all games. I literally don't own a single title that has DLSS support.
Nvidia's devrel is unlike AMD's, so you expect more and more flagship titles to adopt it. The smaller games don't tax the GPU that much and DLSS may not even be required there.

So yeah, having DLSS in cutting edge games matters a lot and not every game out there needs DLSS.
 

Uhtred

Alt Account
Banned
May 4, 2020
1,340
Yes. And I could barely tell the difference, as in I could tell the DLSS solution was definitely superior but not in a way where I'd ever complain about how it looked with checkerboarding in real time while playing.

And you are basing this, of course, on seeing the two rendered natively on your screen, and NOT just on a highly compressed YouTube video, right?
 

Unkindled

Member
Nov 27, 2018
3,247
AMD has a lot of catching up to do regarding DLSS tech. Nvidia has a two-year head start with it, and they are far ahead of AMD in AI/machine learning.
 

Magio

Member
Apr 14, 2020
647
And you are basing this, of course, on seeing the two rendered natively on your screen, and NOT just on a highly compressed YouTube video, right?

I don't have the luxury of an expensive PC, so no. But what I did do is actually play through Death Stranding twice on PS4 Pro, and I was never bothered in any way by the things the DF video told me I was supposed to notice looking significantly worse with checkerboarding vs DLSS. I can tell the latter is superior (I'm not blind, and DF did a great job with that comparison); all I'm saying is that the difference doesn't justify ditching checkerboarding and putting dedicated tensor-core-like hardware in a console in 2020, taking into account what it would cost. You're free to disagree.
 

LCGeek

Member
Oct 28, 2017
5,857
AMD playing catch up is the problem. They need to not only catch up to Nvidia but have exciting new shit too to be more than a cheaper alternative.

AMD's GPU division is on the mend; it will be a while before they catch up to Nvidia as an overall package.

It's not a problem unless you want AMD to go back to a past that is useless for them and for us as consumers.
 

Li bur

Member
Oct 27, 2017
363
AMD playing catch up is the problem. They need to not only catch up to Nvidia but have exciting new shit too to be more than a cheaper alternative.

I still don't get why AMD is trying to compete in the arms race with Nvidia. Maybe focus on APUs instead? Is there any technical reason behind it?

For instance, I'd be very interested in an APU that is on par with PS5 performance / can play next-gen games on medium.
 

eonden

Member
Oct 25, 2017
17,085
I still don't get why AMD is trying to compete in the arms race with Nvidia. Maybe focus on APUs instead? Is there any technical reason behind it?

For instance, I'd be very interested in an APU that is on par with PS5 performance / can play next-gen games on medium.
APU development works thanks to AMD being in both CPUs and GPUs (an APU is really a CPU and a GPU on the same die). They need to compete with Nvidia at least in the mid-range for their APU development (and with discrete Intel GPUs at the low end).
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
One thing I'd say, though (and it's more about how games are in general these days), is that in motion, as in when you are moving the camera, games look really blurry due to all the post-processing, especially at 30 FPS. I often have a hard time noticing sub-native resolution while playing, even when I notice it clearly during cutscenes or while slowly observing the world.
Yeah 30 fps on camera turns is just INTENSE to say the least. I always forget what it is like and then I do some cross platform test and am a tiny bit shocked at how unclear games can be at 30 when the camera moves.
Let's hope that goes away as much as possible next gen.
 
Oct 27, 2017
5,618
Spain
DLSS is way better than those techniques though, there is a reason why Switch users want it so badly in Switch 2 over the generic temporal reconstructions.
DLSS is not way better than those techniques, at least relative to the improvement those techniques give over 1-to-1 pixel TAA. Checkerboard rendering is pretty damn good and has allowed Sony exclusives to consistently output 4K images while rendering 50% of the pixels. Insomniac's temporal injection takes frames rendered at about 1440p and gets really close to 4K. Same with Unreal Engine's TAAU; just look at the Unreal Engine 5 demo, or at Gears 5. All of those titles look a lot better than if they were rendering at the "true" pixel counts without reconstruction techniques. And most of these games don't see the reconstruction techniques transfer to their PC version when/if they get ported.

Take Death Stranding for example: the PS4 Pro is outputting 95% of the quality of a 4K image, while an RX 580 runs at 20 FPS at that resolution because checkerboard rendering is missing from the PC port. Considering it's a pure software implementation in the Decima engine, it almost feels like a disingenuous omission to push DLSS.
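For reference, the pixel-count arithmetic behind those claims is simple (plain resolution math, nothing engine-specific):

```python
# Shaded-pixel budgets behind the reconstruction claims above (plain arithmetic).
native_4k = 3840 * 2160          # 8,294,400 pixels per output frame
checkerboard = native_4k // 2    # checkerboarding shades half the pixels each frame
temporal_1440p = 2560 * 1440     # ~1440p internal render, reconstructed towards 4K

print(f"checkerboard: {checkerboard / native_4k:.0%} of native 4K")    # 50%
print(f"1440p source: {temporal_1440p / native_4k:.1%} of native 4K")  # 44.4%
```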
 

Calabi

Member
Oct 26, 2017
3,490
AMD has a lot of catching up to do regarding DLSS tech. Nvidia has a two-year head start with it, and they are far ahead of AMD in AI/machine learning.

Yeah, it's kind of crazy how far behind Nvidia they are; it seems like if AMD aren't going to bother with machine learning at all, they might as well get out of the tech industry completely. Because at the rate everyone else is progressing, and with the amount of features Nvidia is touting for their cards, it surely would feel a bit embarrassing for AMD to come out and just announce "it's a bit faster" to woos and cheers, no matter how much faster it is. By the time AMD does realise they need machine learning and some of these other features like the IO interface stuff, it will be way too late. I expect even Intel will start using machine learning for their CPUs, and they could easily catch up or even take over with who knows what optimisations from that. Everyone in the tech industry should be using it by now.
 

inner-G

Banned
Oct 27, 2017
14,473
PNW
AMD has a lot of catching up to do regarding DLSS tech. Nvidia has a two-year head start with it, and they are far ahead of AMD in AI/machine learning.
DLSS, ray tracing, live streaming features, memory compression, driver issues, consumer opinion...

They have a LOT of catching up to do. Large Navi needs to compete on more than just raster performance, or be dirt cheap.
 

Dr. Doom

Banned
Oct 29, 2017
1,509
More than likely, yeah. Use a calculator to get the exact number.

outervision.com

Power Supply Calculator - PSU Calculator | OuterVision

My results:

So load wattage is ~580W based on the aforementioned components, with a 626W recommended PSU. How accurate is this calculator?

What kind of troubles would one run into by maxing out the PSU? System instability? PSU shutdown under high load?
 

Readler

Member
Oct 6, 2018
1,972
My results:

So load wattage is ~580W based on the aforementioned components, with a 626W recommended PSU. How accurate is this calculator?

What kind of troubles would one run into by maxing out the PSU? System instability? PSU shutdown under high load?
I'd always shave off ~50W from these calculators. 650W will be more than fine.

The absolute worst thing that would happen, the total worst case scenario, the one thing everyone is so afraid of...is your PC shutting itself off with (in case it's a quality PSU) no damage whatsoever to anything. That's it.
Good PSUs are designed to withstand constant load at their maximum capacity. People have the wrong mental model of PSUs, equating them to cars or human bodies, where constantly running at full utilisation harms components; this is not the case with PSUs. Obviously you want some headroom for sudden bursts and also wear and tear, but people do like to overdo it.
Yes, PSUs are at their most efficient at around 50% capacity, but we're talking about single-digit percentage points here. Especially since most of the time, assuming you're not gaming the whole time you're on your PC, you will be between 0-20% capacity, which is where your PSU is least efficient.

tl;dr: 650W is enough.
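For anyone who wants to sanity-check the math themselves, here's a minimal sketch of that sizing logic; the component wattages below are made-up placeholders, not figures from the calculator above:

```python
# A minimal sketch of the sizing logic above; the component wattages are
# made-up placeholders, not figures from the calculator in the thread.
components_w = {"cpu": 125, "gpu": 320, "rest_of_system": 75}

load_w = sum(components_w.values())      # estimated peak draw: 520 W here
recommended_w = load_w * 1.1             # ~10% headroom for bursts and ageing

psu_w = 650
print(f"load ~{load_w} W, recommended ~{recommended_w:.0f} W, PSU {psu_w} W")
print("fits" if psu_w >= recommended_w else "undersized")
# With these placeholder numbers: load ~520 W, recommended ~572 W -> 650 W fits comfortably.
```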
 

Dr. Doom

Banned
Oct 29, 2017
1,509
I'd always shave off ~50W from these calculators. 650W will be more than fine.

The absolute worst thing that would happen, the total worst case scenario, the one thing everyone is so afraid of...is your PC shutting itself off with (in case it's a quality PSU) no damage whatsoever to anything. That's it.
Good PSUs are designed to withstand constant load at their maximum capacity. People have the wrong mental model of PSUs, equating them to cars or human bodies, where constantly running at full utilisation harms components; this is not the case with PSUs. Obviously you want some headroom for sudden bursts and also wear and tear, but people do like to overdo it.
Yes, PSUs are at their most efficient at around 50% capacity, but we're talking about single-digit percentage points here. Especially since most of the time, assuming you're not gaming the whole time you're on your PC, you will be between 0-20% capacity, which is where your PSU is least efficient.

tl;dr: 650W is enough.
I see, thanks.
 

Caz

Attempted to circumvent ban with alt account
Banned
Oct 25, 2017
13,055
Canada
So this came up in a 3000 series article by TweakTown: https://www.tweaktown.com/news/7491...rds-tight-supply-until-end-of-year/index.html
They'd also want to wait and see what AMD does with its own RDNA 2 reveal, but different sources tell me AMD is on a roll with Ryzen and that "Big Navi will just drag them down". Wowzers. So now I really want to see how Big Navi goes -- and whether it can get anywhere close to the GeForce RTX 3080, let alone their new BFGPU-powered GeForce RTX 3090.
Setting aside TT's reliability (or lack thereof) when it comes to rumors, the wording doesn't sound good if I'm reading this right, i.e. Big Navi is going to be another Vega and drag them down, after RDNA 1 helped bring back competition to the low and mid-range market and Ryzen is kicking Intel around.
 

dgrdsv

Member
Oct 25, 2017
11,885
I just want to see what happens when we get Ampere cards with the same core counts as Turing cards. So if we got a 4352-core 3060, would it end up weaker than a 2080 Ti, making Ampere less performant per TFLOP than Turing? Although maybe less bandwidth would be the issue there.
Turing flops are "augmented" by Turing INTs; this was always a thing which people ignore for some reason. NV's estimates show that about a third of typical gaming code is INT and not FP, which means Turing got roughly +30% in its flops utilization because it can run that third of gaming code on the INT ALUs alongside FP code.

A straight comparison to Ampere, which can run either 32 FP lanes or 16 FP and 16 INT lanes per clock, isn't even possible since this is a different h/w execution configuration. Generally speaking, at worst you're getting the same throughput per clock as Turing (when there's a 50/50 split in the code between FP and INT, or when there's >50% INT), and twice the throughput at best (when it's 100% FP instructions, which on Turing would leave the INT SIMD idle).

Real-world results will likely be somewhere in the middle, with an Ampere SM hitting around 133-150% of Turing's SM throughput. This will inevitably lead to people saying stupid things like "Ampere flops are worse than Turing's", while what it would actually mean is that Ampere uses its flops when they are needed, whereas Turing idled the INT SIMD while running FP-heavy code.

Another point to consider is that RDNA2 is 32- or 64-wide FP32 (with INTs run on the same FP32 SIMDs), which would most likely fit Ampere's configuration a lot better than Turing's, so Ampere flops in actual gaming code will probably be fine. It's the other stuff (compute and HPC) which will most likely suffer here, due to INT throughput being the same as on Turing.

So to answer that question: Ampere with 4352 FP32 SPs will be slower than Turing with 4352 FP32 SPs, because such a Turing card actually has 8704 SPs, half of them FP32 and half INT32, with the latter half handling about a third of the calculations. Ampere with 4352 FP32 SPs will have either that or 4352 FP32+INT32 halves, but not double that number, since it won't be able to run FP32 and INT32 concurrently at all times. This will obviously mean that "the same" number of SPs ends up some 30% slower than on Turing, but it won't actually be "the same" number.
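To make the throughput argument concrete, here's a toy per-SM model of it. The lane counts are my assumption (64 FP32 plus 64 dedicated INT32 lanes per Turing SM, 64 FP32 plus 64 shared FP32/INT32 lanes per Ampere SM), not figures from the post, so treat it as a sketch rather than a spec:

```python
# Toy per-SM, per-clock throughput model for the FP/INT mix argument above.
# Assumed lane counts (mine, not from the post): a Turing SM has 64 FP32 lanes
# plus 64 dedicated INT32 lanes; an Ampere SM has 64 FP32 lanes plus 64 lanes
# that can do either FP32 or INT32 on a given clock.

def turing_ipc(int_frac: float, lanes: int = 64) -> float:
    """Sustained instructions/clock when `int_frac` of the mix is INT."""
    fp_frac = 1.0 - int_frac
    # Each pipe retires at most `lanes` ops per clock; the busier pipe is the bottleneck.
    return lanes / max(fp_frac, int_frac)

def ampere_ipc(int_frac: float, lanes: int = 64) -> float:
    """Same mix on an SM whose second datapath is shared between FP32 and INT32."""
    rate = 2 * lanes                        # both datapaths running pure FP32
    if int_frac > 0:
        rate = min(rate, lanes / int_frac)  # INT can only use the shared datapath
    return rate

for int_frac in (0.0, 1 / 3, 0.5):
    t, a = turing_ipc(int_frac), ampere_ipc(int_frac)
    print(f"INT share {int_frac:.0%}: Turing {t:.0f}, Ampere {a:.0f}, ratio {a / t:.2f}x")

# INT share 0%:  Turing 64,  Ampere 128, ratio 2.00x  (pure FP code)
# INT share 33%: Turing 96,  Ampere 128, ratio 1.33x  (~NV's "a third is INT" estimate)
# INT share 50%: Turing 128, Ampere 128, ratio 1.00x  (both fully occupied)
```

With a third of the mix being INT, the model lands right at the ~133% figure quoted above; pure FP code gives the 2x best case and a 50/50 split gives parity.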
 
Oct 25, 2017
41,368
Miami, FL
Yeah 30 fps on camera turns is just INTENSE to say the least.
It really is.

I was trying to tell people about my experience with FF7:R on PS4. Down in the sectors with all those brown shades, panning around literally made me queasy. I had to repeatedly stop and start just to see things in the environment if I was looking for something.

Death to 30fps.
 

Deleted member 19533

User requested account closure
Banned
Oct 27, 2017
3,873
My results:

So load wattage is ~580W based on the aforementioned components, with a 626W recommended PSU. How accurate is this calculator?

What kind of troubles would one run into by maxing out the PSU? System instability? PSU shutdown under high load?
The calculators overestimate as well. Consider that they're also estimating based on everything running full tilt, which doesn't really happen.

More overhead is great, but not needed. If you have the money and want it, do it. However, it's not necessary.
 

dgrdsv

Member
Oct 25, 2017
11,885
This was absolutely true in the DirectX 11 era, but today AMD drivers are very solid and perform amazingly in DirectX 12/Vulkan workloads.
That's because these APIs have moved the work the drivers were doing (badly) in DX11 from the drivers to game developers. And while AMD does usually perform well in DX12/VK, there's a crapload of titles which are straight-up broken in DX12/VK with no way of fixing them through drivers, as was possible in DX11.
 

Hero_Select

One Winged Slayer
Banned
Oct 27, 2017
2,008
The people that had to deal with massive driver issues for the 5000 series beg to differ. The AMD subreddit and everything online were flooded with driver issues about that series for months.
I can confirm this as well. I went from a flawless 980Ti experience to a 5700XT that would constantly crash and give me BSOD for months before something stable enough came out.

It works now but that's not something I want to have to deal with again. Even if Big Navi matches 3080, that initial 5700XT experience plus all these neat features coming out of nvidia have pretty much pushed me back to them.
 

Kuro

Member
Oct 25, 2017
20,653
It really is.

I was trying to tell people about my experience with FF7:R on PS4. Down in the sectors with all those brown shades, panning around literally made me queasy. I had to repeatedly stop and start just to see things in the environment if I was looking for something.

Death to 30fps.
FF7 has really bad ghosting in motion because of its anti aliasing solution so that probably contributed to that. 30fps isn't ideal by any means but there was a compounding issue there.
 

Uhtred

Alt Account
Banned
May 4, 2020
1,340
DLSS is not way better than those techniques, at least relative to the improvement those techniques give over 1-to-1 pixel TAA. Checkerboard rendering is pretty damn good and has allowed Sony exclusives to consistently output 4K images while rendering 50% of the pixels. Insomniac's temporal injection takes frames rendered at about 1440p and gets really close to 4K. Same with Unreal Engine's TAAU; just look at the Unreal Engine 5 demo, or at Gears 5. All of those titles look a lot better than if they were rendering at the "true" pixel counts without reconstruction techniques. And most of these games don't see the reconstruction techniques transfer to their PC version when/if they get ported.

Take Death Stranding for example: the PS4 Pro is outputting 95% of the quality of a 4K image, while an RX 580 runs at 20 FPS at that resolution because checkerboard rendering is missing from the PC port. Considering it's a pure software implementation in the Decima engine, it almost feels like a disingenuous omission to push DLSS.

The PS4 version is NOT running at the graphics settings that get you 20 FPS on a 580 though. Every benchmark I saw giving it a ~25 FPS rating was running at NATIVE 4k and max settings.
 

Akronis

Prophet of Regret - Lizard Daddy
Banned
Oct 25, 2017
5,451
Have you used an AMD card recently?
This was absolutely true in the DirectX 11 era, but today AMD drivers are very solid and perform amazingly in DirectX 12/Vulkan workloads.
It has been at least a couple of years since AMD turned their driver situation around.
In fact, I've had more issues in the past year with Nvidia GPUs than with AMD.

AMD's driver overhead for DX11 titles is still bad and the vast majority of titles are DX11.

I won't consider AMD unless they fix it, because it's clear that DX11 is going to still be around for years.
 

Bashteee

Member
Oct 27, 2017
1,193
I can confirm this as well. I went from a flawless 980Ti experience to a 5700XT that would constantly crash and give me BSOD for months before something stable enough came out.

I had problems with Nvidia and the 2080 too. Games and Windows 10 were crashing all the time, and the latest Anno game would corrupt the whole system to the point that I had to reload my BIOS settings. It took 3 updates and a few changes in some random configuration files to fix that.

The latest Metro was completely unplayable with DX12; no fix was available at the time, though it only crashed directly to desktop.

I can't share the sentiment that Nvidia has superior drivers; this goes straight against my experience. It would not surprise me to see similar problems with Ampere.
 

Corralx

Member
Aug 23, 2018
1,176
London, UK
AMD's driver overhead for DX11 titles is still bad and the vast majority of titles are DX11.

I won't consider AMD unless they fix it, because it's clear that DX11 is going to still be around for years.

I think AMD has improved the D3D11 situation quite a bit as well, but I cannot share the sentiment about it being around for years to come. D3D11 is on life support.
In fact, Microsoft has created a D3D11-on-D3D12 emulation layer, and actual support for D3D11 in the drivers will likely be phased out soon.
This autumn is gonna be a huge transition with next gen approaching, and once the very few D3D11 titles currently in development are released, it'll be just legacy. I wouldn't expect a huge investment from anyone in improving D3D11 going forward.
 

spool

Member
Oct 27, 2017
773
I'm a Linux user and I don't know how much overlap that driver has with the Windows one, but I had a lot of issues with my Vega56 and crashes for months. It was eventually sorted out and it's been smooth riding since, but it left a bad taste in my mouth.

I still don't think I have much choice but to get an RDNA2 card when I upgrade though, because while AMD has had some issues they actually care about the overall experience on Linux, unlike Nvidia. I hope they're either competitive with the new Nvidia cards, or at least a decent amount cheaper.
 
Apr 4, 2018
4,513
Vancouver, BC
It's not really that simple.
And if that were the case, it wouldn't explain why it would have helped AMD's driver quality while actually degrading Nvidia's.

From what I understand there is much more architectural fragmentation/complexity on the Nvidia side, which is why they've been having more issues since the 10/20 series than they had in the past. For example, even cards within the same series that seem close (like a 1070 vs a 1070 Ti) can have more architectural differences than expected, causing more complexity for drivers/engineers.

I've had the same experience as you, in that AMD drivers have significantly improved over the past couple of years (and I'd say their software has gotten significantly better too), while Nvidia's have degraded in some cases. Nvidia's drivers are still generally good (and seem to have improved recently), but there have been cases where new-ish games have seen persistent crashing/system locks on specific Nvidia cards.
 

OberstKrueger

Member
Jan 7, 2018
591
I bought a 5700 at launch. For the first month or two, I had random but infrequent crashes. I can't recall having any sort of crash since then, so whatever issues were present have since been worked out, at least for what I'm throwing at it.

I would love it if Big Navi could compete with the 3080, but as long as it gets to about 3070 level or a bit higher, I'll be happy. I use AMD so the hand-me-downs can go to Linux and Mac duty, so any improvement there is welcomed.
 

Shocchiz

Member
Nov 7, 2017
577
As AMD is giving us no info, I tried to do some math.
Considering these official specifications:
12 TFLOPS, 52 CU @ 1825 MHz - RDNA2 [Xbox Series X specs]
9.75 TFLOPS, 40 CU @ 1755 MHz - RDNA1 [5700XT specs]

What I get is:
12 TFLOPS / 52 CU = 0.23 TFLOPS per CU, x 80 CUs (rumored for Big Navi) = 18.4 TFLOPS @ 1825 MHz = 22.08 TFLOPS @ 2190 MHz <--- BIG NAVI???

Considering RDNA1 TFLOPS I get:
10 TFLOPS (scaled to match the Xbox frequency, just for the sake of comparison) / 40 CU = 0.25 TFLOPS per CU
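For reference, here's the same scaling as a quick script. The 128 FLOPs per CU per clock is the standard RDNA figure (64 ALUs x 2 ops for FMA); the 80 CU count and 2190 MHz clock are only the rumored Big Navi numbers, not confirmed specs:

```python
# Sketch of the per-CU scaling above. 128 FLOPs per CU per clock is the standard
# RDNA figure (64 ALUs x 2 ops for FMA); 80 CUs and 2190 MHz are the rumored
# Big Navi numbers from the post, not confirmed specs.

def tflops(cus: int, mhz: float, flops_per_cu_per_clock: int = 128) -> float:
    return cus * flops_per_cu_per_clock * mhz * 1e6 / 1e12

print(tflops(52, 1825))   # ~12.15 -> Xbox Series X (matches the 12 TFLOPS spec)
print(tflops(40, 1755))   # ~8.99  -> 5700XT at 1755 MHz (9.75 TFLOPS corresponds to its ~1905 MHz boost clock)
print(tflops(80, 1825))   # ~18.7  -> rumored 80 CU part at Series X clocks
print(tflops(80, 2190))   # ~22.4  -> rumored 80 CU part at 2190 MHz
```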

So my questions:
- did I calculate it wrong?
- where are ipc RDNA2 improvements?
- how in the world could Big Navi beat a 3080?
 

Blade30

Member
Oct 26, 2017
4,613
I think someone linked this in another thread but I can't find it. Anyway, here are Jayz's thoughts on RDNA2 and whether it can compete with Ampere. I'm watching it now.