
TheNerdyOne

Member
Oct 28, 2017
521
Memory bandwidth of the 3080 is only 1.7x that of the 2080, so that may be the bottleneck rather than TF. We haven't seen actual RDNA2 performance with the predicted CU/memory configurations, so I don't see how you can compare perf/TF.

It's really simple: nothing they've revealed about the architecture so far indicates any regression in perf/TF versus RDNA1, and in fact we know of several new features RDNA2 supports that RDNA1 doesn't, which should mean more perf/TF. But even doing this math assuming no gains whatsoever, we can say for a fact that perf/TF is better on RDNA2 than on Ampere for gaming. RDNA1 perf/TF is roughly the same as Turing perf/TF, while Ampere perf/TF is much lower than both, so there's your point of comparison for this napkin-math estimate. Basically, with everything known about RDNA2 it's impossible for perf/TF to have regressed at all, let alone regressed as much as Ampere did versus Turing. It's not really a useful metric for comparing different architectures in either case; it's just funny that perf/TF used to be a big Nvidia talking point in these kinds of discussions and now it suddenly doesn't matter at all, just like power draw no longer matters when it used to be the most important thing in the world.
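To make the comparison concrete, here's the napkin math as a quick sketch. The 1.65x figure is just an assumed midpoint of the 1.5 - 1.8x real-world range quoted in this thread, and the TF numbers are the marketing figures, so treat the output as illustrative:

```python
# Napkin-math sketch of relative perf/TF (illustrative, based on figures in this thread).
turing_tf, ampere_tf = 10.0, 30.0      # RTX 2080 vs RTX 3080 marketing TFLOPS
real_world_speedup   = 1.65            # assumed midpoint of the 1.5-1.8x range

# Perf/TF of Ampere relative to Turing: real speedup divided by the TF ratio.
ampere_vs_turing_perf_per_tf = real_world_speedup / (ampere_tf / turing_tf)
print(f"Ampere delivers ~{ampere_vs_turing_perf_per_tf:.2f}x Turing's perf per TF")      # ~0.55x

# If RDNA2 merely holds RDNA1's (~Turing-like) perf/TF, its advantage over Ampere is:
print(f"RDNA2 perf/TF advantage over Ampere: ~{1 / ampere_vs_turing_perf_per_tf:.1f}x")  # ~1.8x
```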
 

bruhaha

Banned
Jun 13, 2018
4,122
It's really simple: nothing they've revealed about the architecture so far indicates any regression in perf/TF versus RDNA1, and in fact we know of several new features RDNA2 supports that RDNA1 doesn't, which should mean more perf/TF. But even doing this math assuming no gains whatsoever, we can say for a fact that perf/TF is better on RDNA2 than on Ampere for gaming. RDNA1 perf/TF is roughly the same as Turing perf/TF, while Ampere perf/TF is much lower than both, so there's your point of comparison for this napkin-math estimate. Basically, with everything known about RDNA2 it's impossible for perf/TF to have regressed at all, let alone regressed as much as Ampere did versus Turing. It's not really a useful metric for comparing different architectures in either case; it's just funny that perf/TF used to be a big Nvidia talking point in these kinds of discussions and now it suddenly doesn't matter at all, just like power draw no longer matters when it used to be the most important thing in the world.

How can you conclude Ampere regressed when memory bandwidth wasn't scaled equally with TF? And how do you know the high-end RDNA2 SKU they end up putting on the market will scale memory bandwidth appropriately, since we don't even know whether they'll use HBM2 or GDDR6?
 

TheNerdyOne

Member
Oct 28, 2017
521
How can you conclude Ampere regressed when memory bandwidth wasn't scaled equally with TF? And how do you know the high-end RDNA2 SKU they end up putting on the market will scale memory bandwidth appropriately, since we don't even know whether they'll use HBM2 or GDDR6?

16 Gbps GDDR6 on a 256-bit bus versus two stacks of HBM2 makes essentially no difference in terms of bandwidth, if you're curious (both land around 512 GB/s). Four stacks of HBM2 would be a dramatic difference, of course, so you're right that we don't know. But we do know that in practice, in the real world, the 30 TF RTX 3080 is only 1.5 - 1.8x faster than the 10 TF RTX 2080, so a regression in perf/TF. Memory bandwidth won't scale linearly on ANY of these GPUs, because you can't triple memory bandwidth on a GPU in a single generation; if memory bandwidth becomes the bottleneck, so be it. In terms of the shaders themselves we're not expecting a regression on RDNA2, and in fact we expect gains, as I said. We do know for a fact that there's a regression on Ampere, because half of the CUDA cores in a given SM can do INT or FP32 on any given cycle, but not a mix of the two, meaning that whenever INT work is issued to that shared half of an SM, the FP32 throughput of the SM for those cycles is cut. That makes it a complete mess to figure out in practice, and in fact makes it impossible to REALLY determine perf/TF, because effective TF varies cycle to cycle and game to game. It's really weird.
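For anyone following along, here's a quick sketch of both calculations. The HBM2 per-pin rate and the INT-issue fraction are assumptions on my part, not leaked specs:

```python
# Bandwidth napkin math (per-pin rates are assumptions, not leaked specs).
gddr6_gb_s = 16 * 256 / 8                # 16 Gbps x 256-bit bus          -> 512 GB/s
hbm2_gb_s  = 2 * (2.0 * 1024 / 8)        # 2 stacks x 1024-bit @ ~2.0 Gbps -> 512 GB/s

# Toy model of the Ampere SM point: half the FP32 lanes are shared with INT32,
# so FP32 throughput falls as the shared half spends more cycles on INT work.
def effective_fp32_tf(peak_tf, int_issue_fraction):
    return peak_tf * (0.5 + 0.5 * (1.0 - int_issue_fraction))

print(gddr6_gb_s, hbm2_gb_s)             # 512.0 512.0
print(effective_fp32_tf(30.0, 0.25))     # 26.25 "effective" TF if 25% of shared cycles go to INT
```

The 25% INT figure is only there to show the shape of the effect; the real mix varies per game, which is exactly why a fixed perf/TF number for Ampere is hard to pin down.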
 

bruhaha

Banned
Jun 13, 2018
4,122
we do know that in practice, in the real world, the 30 TF RTX 3080 is only 1.5 - 1.8x faster than the 10 TF RTX 2080, so a regression in perf/TF.

No, we know how 30 TF with 760 GB/s of bandwidth compares with 10 TF and 448 GB/s, on workloads that obviously need to access memory frequently to rasterize, probably gigabytes of data per frame. They're not drawing flat textures or just tessellating geometry in the games tested. The texture and geometry data required for rasterization obviously doesn't fit in GPU cache, so bandwidth is going to be a factor, not just compute speed.
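A quick way to see the point being made here, using the figures in this post (a sketch, nothing more):

```python
# Bandwidth per unit of compute for the two cards being compared.
cards = {"RTX 2080": (10, 448), "RTX 3080": (30, 760)}   # (TF, GB/s) from the post
for name, (tf, gb_s) in cards.items():
    print(f"{name}: {gb_s / tf:.1f} GB/s per TF")
# RTX 2080: 44.8 GB/s per TF
# RTX 3080: 25.3 GB/s per TF  -> far less bandwidth available per unit of compute
```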
 

Theswweet

RPG Site
Verified
Oct 25, 2017
6,417
California
I think it's clear that these leaks are promising, and show that there's at the very least a real possibility that AMD may be able to compete with Ampere in terms of rasterization performance. But, that's just that - a possibility. We need to see benchmarks and real testing before we can say anything more than that.

It's still much more promising than I was expecting just a few weeks ago.
 

Lakeside

Member
Oct 25, 2017
9,221
How can you conclude Ampere regressed when memory bandwidth wasn't scaled equally with TF? And how do you know the high-end RDNA2 SKU they end up putting on the market will scale memory bandwidth appropriately, since we don't even know whether they'll use HBM2 or GDDR6?

When you are at war and in the heat of battle it all becomes clear.
 

TheNerdyOne

Member
Oct 28, 2017
521
No, we know how 30 TF with 760 GB/s of bandwidth compares with 10 TF and 448 GB/s, on workloads that obviously need to access memory frequently to rasterize, probably gigabytes of data per frame. They're not drawing flat textures or just tessellating geometry in the games tested. The texture and geometry data required for rasterization obviously doesn't fit in GPU cache, so bandwidth is going to be a factor, not just compute speed.

Bandwidth is indeed a factor, and you can't increase it linearly with shading performance; that has always been the case. In the real world we're seeing a performance regression per TF on Ampere, and in the real world you'll see performance increases per TF with RDNA2. And none of that actually matters in the first place: actual, real-world rasterization performance is the primary performance metric for this generation of GPUs. Next gen will be all about RT (in two more years, when most of the RT software being made explicitly because AMD put RT hardware in the consoles starts coming out). While 99% of software is rasterization only, with no DLSS and no RT, pure raw raster perf is the key metric, and in that metric RDNA2 has a real shot at blowing the doors off Nvidia. That's why all the Nvidia fanboys are crying about DLSS: it's all they've got left to argue with, and even that is presently supported by only 14 games out of the tens of thousands of PC games that exist.
 

Spoit

Member
Oct 28, 2017
3,989
Can you explain why performance per teraflop matters? I could maybe understand why we might care about perf/watt or other figures that are actually based on real measurements. But TF is a ridiculous metric to begin with, and trying to build calculations on top of it just seems like a pointless endeavor.
 

Polyh3dron

Prophet of Regret
Banned
Oct 25, 2017
9,860
I'm glad we agree that DLSS is far from irrelevant.
When those few DLSS games include Fortnite, Call of Duty: Black Ops Cold War, Cyberpunk 2077, Death Stranding, and Control, it's a bit disingenuous to dismiss DLSS because of the small number of supported games, when those are some of the biggest games out there.
 

TheNerdyOne

Member
Oct 28, 2017
521
This. Especially when it comes to AMD GPUs. It all looks good on paper, but I'll wait for real-world performance before making any claims.

On paper the 3080 looks 3x as fast as the 2080; in reality it's 50 - 80% faster, depending on the resolution you compare them at. So "especially when it comes to AMD GPUs" is complete and utter horseshit when Nvidia just pulled off the most underwhelming real-world performance versus paper specs in the history of GPUs. But yeah, keep talking that talk.
 

TheNerdyOne

Member
Oct 28, 2017
521
Can you explain why performance per teraflop matters? I could maybe understand why we might care about perf/watt or other figures that are actually based on real measurements. But TF is a ridiculous metric to begin with, and trying to build calculations on top of it just seems like a pointless endeavor.

Perf/watt is something we can actually talk about in some fashion. The 238W RDNA2 part is 2.3x faster than the 225W RDNA1 part. Even if we again assume RDNA2 gains zero performance over RDNA1 architecturally, that's the single biggest perf/watt improvement in the modern history of GPUs, and it makes Nvidia's near-identical perf/watt (about 7% better per watt than the Turing flagship if you measure at 4K, and worse per watt at 1440p, despite a node shrink) look downright silly, since AMD didn't even get a node shrink here.
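Here's that perf/watt claim worked through; the performance and wattage figures are the leaked/claimed numbers from this thread, not measurements:

```python
# Perf/watt napkin math from the thread's claimed figures.
rdna1_perf, rdna1_w = 1.0, 225.0     # 5700 XT as the baseline
rdna2_perf, rdna2_w = 2.3, 238.0     # claimed Navi 21 performance and power

gain = (rdna2_perf / rdna2_w) / (rdna1_perf / rdna1_w)
print(f"~{gain:.2f}x perf/watt versus RDNA1")    # ~2.17x
```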
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,551
On paper the 3080 looks 3x as fast as the 2080; in reality it's 50 - 80% faster, depending on the resolution you compare them at. So "especially when it comes to AMD GPUs" is complete and utter horseshit when Nvidia just pulled off the most underwhelming real-world performance versus paper specs in the history of GPUs. But yeah, keep talking that talk.
That's if you don't understand the architecture. Going around comparing cards' performance by their TF numbers across different architectures is the dumbest thing someone can do.
 

Serene

Community Resettler
Member
Oct 25, 2017
52,534
Promising numbers. Hopefully they actually amount to something when it's all said and done.

On paper the 3080 looks 3x as fast as the 2080; in reality it's 50 - 80% faster, depending on the resolution you compare them at. So "especially when it comes to AMD GPUs" is complete and utter horseshit when Nvidia just pulled off the most underwhelming real-world performance versus paper specs in the history of GPUs. But yeah, keep talking that talk.

AMD will be ok man they don't need you to do this
 

TheNerdyOne

Member
Oct 28, 2017
521
That's if you don't understand the architecture. Going around comparing cards' performance by their TF numbers across different architectures is the dumbest thing someone can do.

Then let's look at perf/watt: Ampere is near identical to Turing per watt despite a node shrink, while RDNA2, on the same node as RDNA1, appears to deliver roughly 2x the perf/watt of RDNA1. See the post above.
 

TheNerdyOne

Member
Oct 28, 2017
521
Yeah, both posts are incoherent, since perf/watt relates to efficiency and not to direct performance gain.

The power draw is nearly the same while delivering 2.3x the performance of the previous generation. Meanwhile, Nvidia is burning roughly 35% more power to deliver roughly 35% more performance than the 2080 Ti. So what do you think is going to happen if AMD decides to ship a 350W behemoth like Nvidia just did, when their ~240W card already looks like it could be faster than Nvidia's 350W one?
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,551
The power draw is nearly the same while delivering 2.3x the performance of the previous generation. Meanwhile, Nvidia is burning roughly 35% more power to deliver roughly 35% more performance than the 2080 Ti. So what do you think is going to happen if AMD decides to ship a 350W behemoth like Nvidia just did, when their ~240W card already looks like it could be faster than Nvidia's 350W one?
Boy you're in for disappointment.
 

TheNerdyOne

Member
Oct 28, 2017
521
Boy you're in for disappointment.
You say that, but you can provide absolutely nothing to back up the statement. Meanwhile I can (and did, in the other thread) provide the actual specs for these GPUs, which back up my statements a lot better than "nuh uh it's unpossible!!!1!!" backs up yours and those of any other Nvidia fans who desperately wish it weren't so. When the single biggest (and only real) argument any Nvidia fan can muster on this forum is "but DLSS!", that says a lot. When Nvidia drops the price of its flagship non-Titan card from $1200 to $700 in a single generation, you know they're afraid and that they absolutely know RDNA2 is coming to play. If there were nothing to fear from AMD, why drop prices at all? Because they did: the 3080 is the 2080 Ti equivalent and the 3090 is the Titan equivalent here, both use the large xx102 die, and there's no larger die Nvidia can use for a "3080 Ti," so the 3080 is it, and Nvidia cut the price nearly in half gen on gen. If they knew they were going to continue to utterly dominate the performance discussion, they would have kept pricing the same or increased it, like they did with Turing when they knew AMD had nothing. Use some common sense; it's plainly obvious.
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,551
You say that, but you can provide absolutely nothing to back up the statement. Meanwhile I can (and did, in the other thread) provide the actual specs for these GPUs, which back up my statements a lot better than "nuh uh it's unpossible!!!1!!" backs up yours and those of any other Nvidia fans who desperately wish it weren't so.

Navi 22 underlined in red. Does that look like a 2.3x performance increase over a 5700 XT?

[Image: Ashes of the Singularity benchmark results with the Navi 22 entry underlined in red]


Now here you have numbers. I'm not making a crazy assumption based on misconceptions.
 

TheNerdyOne

Member
Oct 28, 2017
521
Navi 22 underlined in red. Does that look like a 2.3x performance increase over a 5700 XT?

[Image: Ashes of the Singularity benchmark results with the Navi 22 entry underlined in red]


Now here you have numbers. I'm not making a crazy assumption based on misconceptions.

Navi 22 isn't the part with 2.3x the performance of a 5700 XT; Navi 21 is. Navi 21 is ~240W to the 5700 XT's 225W, and is 2.3x faster.

Navi 22, 40 CUs at 2.5 GHz, is by the numbers ~35 - 40% faster than a 5700 XT, and oh hey, guess what? So is a 2080 Ti, and this screenshot shows it matching that GPU, so we have an actual game doing in practice what the GPU should do on paper. You've only made my argument stronger (and yes, Ashes is a game, even if it's a bad one). So what now? Navi 22 is 170W, in case you were curious, so 35 - 40% faster than a 5700 XT while drawing roughly 25% less power (narrower parts at higher clock speeds are less efficient than wider parts at lower clock speeds, also in case you were curious). Also, something weird is going on with the MASSIVE variance in 2080 Ti scores in the Ashes benchmark in that screenshot, if you didn't notice. Either way, good job forgetting Navi 21 entirely.
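The "by the numbers" part is straightforward FP32 math; whether you get ~30% or ~40% depends on which 5700 XT clock you compare against, and it assumes performance scales with raw compute:

```python
# FP32 TFLOPS = CUs * 64 shaders per CU * 2 ops per clock * clock in GHz / 1000
def tf(cus, ghz):
    return cus * 64 * 2 * ghz / 1000

navi22 = tf(40, 2.5)                                   # ~12.8 TF at the leaked 2.5 GHz
for label, ghz in [("game clock", 1.755), ("boost clock", 1.905)]:
    print(f"vs 5700 XT {label}: +{navi22 / tf(40, ghz) - 1:.0%} raw compute")
# vs 5700 XT game clock:  +42% raw compute
# vs 5700 XT boost clock: +31% raw compute
```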
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,551
Navi 22 isn't the part with 2.3x the performance of a 5700 XT; Navi 21 is. Navi 21 is ~240W to the 5700 XT's 225W, and is 2.3x faster.

Navi 22, 40 CUs at 2.5 GHz, is by the numbers ~35 - 40% faster than a 5700 XT, and oh hey, guess what? So is a 2080 Ti, and this screenshot shows it matching that GPU, so we have an actual game doing in practice what the GPU should do on paper. You've only made my argument stronger (and yes, Ashes is a game, even if it's a bad one). So what now? Navi 22 is 170W, in case you were curious, so 35 - 40% faster than a 5700 XT while drawing roughly 25% less power (narrower parts at higher clock speeds are less efficient than wider parts at lower clock speeds, also in case you were curious).
No, it's actually ~20% compared to the 5700 XT, and 6200 is 2080 Super territory. Again, you will be disappointed.
 

TheNerdyOne

Member
Oct 28, 2017
521
No, it's actually ~20% compared to the 5700 XT. Again, you will be disappointed.

Want a ban bet on whether the 40 CU RDNA2 part is only 20% faster than the 5700 XT, or a fair bit more than that on average? Put your account where your mouth is; go on, do it. Nvidia fans are the ones disappointed by this news: it means AMD's statement that RDNA2 will do to Nvidia what Zen did to Intel is absolutely true, and if RDNA continues the gen-on-gen performance trend that Zen has seen, Nvidia is going to struggle to keep up on the inferior Samsung nodes they're being forced to use. Because, again, AMD is TSMC's largest customer by wafer volume now, and most definitely a more important customer for TSMC than Nvidia, which is why Nvidia had to use a two-year-old Samsung node for the consumer GPU launch in the first place.
 

kami_sama

Member
Oct 26, 2017
7,005
Well, if it works as intended, great for AMD!
I will be a bit miffed due to my FOMO, but it's not that big of a deal to have a 3080 instead of an RDNA2 card.
 

TheNerdyOne

Member
Oct 28, 2017
521
Well, if it works as intended, great for AMD!
I will be a bit miffed due to my FOMO, but it's not that big of a deal to have a 3080 instead of an RDNA2 card.

Of course it isn't; the 3080 is a great card and I've never once said otherwise. I did say that Nvidia lied about its performance and hasn't really moved the needle on gaming efficiency versus the previous gen, and that's all 100% true. It's still a great card.
 

Cats

Member
Oct 27, 2017
2,929
I would be very interested in this if it can just brute-force past a 3080. DLSS and accompanying features are cool, but they really only exist in the big new releases, and that's not where I usually game. My biggest love is VR, and I need as much raw power as I can get to run applications like VRChat, especially with the G2 coming soon and its huge resolution bump.

I hope they can make this card fill that role, because I would be totally in, even if it ends up completely missing features like DLSS. I honestly wouldn't care.
 

TheNerdyOne

Member
Oct 28, 2017
521
I would be very interested in this if it can just brute-force past a 3080. DLSS and accompanying features are cool, but they really only exist in the big new releases, and that's not where I usually game. My biggest love is VR, and I need as much raw power as I can get to run applications like VRChat, especially with the G2 coming soon and its huge resolution bump.

I hope they can make this card fill that role, because I would be totally in, even if it ends up completely missing features like DLSS. I honestly wouldn't care.

Is VRChat demanding? I don't have a VR headset so I haven't played it personally, but visually, based on screenshots, it doesn't look like much. Bad optimization, or is there more going on than meets the eye?
 

ThatNerdGUI

Prophet of Truth
Member
Mar 19, 2020
4,551
Want a ban bet on whether the 40 CU RDNA2 part is only 20% faster than the 5700 XT, or a fair bit more than that on average? Put your account where your mouth is; go on, do it. Nvidia fans are the ones disappointed by this news: it means AMD's statement that RDNA2 will do to Nvidia what Zen did to Intel is absolutely true, and if RDNA continues the gen-on-gen performance trend that Zen has seen, Nvidia is going to struggle to keep up on the inferior Samsung nodes they're being forced to use. Because, again, AMD is TSMC's largest customer by wafer volume now, and most definitely a more important customer for TSMC than Nvidia, which is why Nvidia had to use a two-year-old Samsung node for the consumer GPU launch in the first place.
If you think I'm an Nvidia fan because I'm calling things like they are, then you're just wrong and showing your fanboy colors. My last card was a Radeon VII, which was equally hyped by fanboys and turned out to be a pile of garbage. Like I said in my first post, I'll wait for real-world performance tests and benchmarks before making any crazy claims, but you put your fanboy hat on and went around attacking everyone in the thread.
 

kami_sama

Member
Oct 26, 2017
7,005
Of course it isn't; the 3080 is a great card and I've never once said otherwise. I did say that Nvidia lied about its performance and hasn't really moved the needle on gaming efficiency versus the previous gen, and that's all 100% true. It's still a great card.
Thankfully they released reviews a day in advance. I would be pissed if I had actually bought it based on the "2x perf, 1.9x perf/watt" marketing shit.
I knew what I was getting into.
 

TheNerdyOne

Member
Oct 28, 2017
521
Thankfully they released reviews a day in advance. I would be pissed if I had actually bought it based on the "2x perf, 1.9x perf/watt" marketing shit.
I knew what I was getting into.

Yeah, I mean, I still wish reviews would go up a week or two ahead of sales, and that reviewers got product 3 - 4 weeks early with actual release drivers from the start, so they could test things properly. But that ain't the world we live in, sadly :(
 

NaDannMaGoGo

Member
Oct 25, 2017
5,967
Navi 22 isn't the part with 2.3x the performance of a 5700 XT; Navi 21 is. Navi 21 is ~240W to the 5700 XT's 225W, and is 2.3x faster.

Navi 22, 40 CUs at 2.5 GHz, is by the numbers ~35 - 40% faster than a 5700 XT, and oh hey, guess what? So is a 2080 Ti, and this screenshot shows it matching that GPU, so we have an actual game doing in practice what the GPU should do on paper. You've only made my argument stronger (and yes, Ashes is a game, even if it's a bad one). So what now? Navi 22 is 170W, in case you were curious, so 35 - 40% faster than a 5700 XT while drawing roughly 25% less power (narrower parts at higher clock speeds are less efficient than wider parts at lower clock speeds, also in case you were curious). Also, something weird is going on with the MASSIVE variance in 2080 Ti scores in the Ashes benchmark in that screenshot, if you didn't notice. Either way, good job forgetting Navi 21 entirely.

Where do you get the idea that 240W is supposed to be the whole Navi 21 card, when everyone says that's just the GPU pulling that much and you need to account for RAM, VRM efficiency, and whatever other minor stuff there is? I bet you even read that too, but purposely wrote this drivel instead.
 

pswii60

Member
Oct 27, 2017
26,673
The Milky Way
Can we just be happy that we're seeing aggressive and healthy competition in the GPU space? It will only serve to push gains further as we move forward. It seems this isn't just another case of the AMD "Sonic cycle," but that RDNA2 will be a major improvement over RDNA1.

I don't get the GPU brand-warrior mentality at all. I can kind of try to understand console warriors to an extent, who have an emotional attachment to certain exclusives or an ecosystem, but here we're just talking about graphics cards that all do and play the same things, either faster or slower.
Thankfully they released reviews a day in advance. I would be pissed if I had actually bought it based on the "2x perf, 1.9x perf/watt" marketing shit.
I knew what I was getting into.
Sure they did for the 3080, but not the 3090.
 

TheNerdyOne

Member
Oct 28, 2017
521
Where do you get the idea that 240W is supposed to be the whole Navi 21 card, when everyone says that's just the GPU pulling that much and you need to account for RAM, VRM efficiency, and whatever other minor stuff there is? I bet you even read that too, but purposely wrote this drivel instead.

The spec doesn't actually specify. Even if we assume that's power for the GPU alone, it's still a sub-300W card, and that still gives them more than a 50% perf/watt improvement, still beating the claim AMD actually made. There are Navi 10 entries in the same files showing 180W power draw, but they also only show a 1400 MHz clock speed rather than their actual release clocks, not even the base clock, so it's a bit weird.
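Worked out, with the 300W whole-board figure being a pessimistic assumption rather than anything leaked:

```python
# Worst-case perf/watt sketch: treat the leaked 240W as GPU-only and assume ~300W
# for the whole board (assumption), against the same claimed 2.3x performance.
claimed_perf = 2.3
board_w, baseline_w = 300.0, 225.0       # assumed Navi 21 board power vs 5700 XT board power

gain = (claimed_perf / board_w) / (1.0 / baseline_w)
print(f"~{gain - 1:.1%} better perf/watt even in this worst case")   # ~72.5%, above AMD's +50% claim
```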
 

NaDannMaGoGo

Member
Oct 25, 2017
5,967
Can we just be happy that we're seeing aggressive and healthy competition in the GPU space? It will only serve to push gains further as we move forward. It seems this isn't just another case of the AMD "Sonic cycle," but that RDNA2 will be a major improvement over RDNA1.

I don't get the GPU brand-warrior mentality at all. I can kind of try to understand console warriors to an extent, who have an emotional attachment to certain exclusives or an ecosystem, but here we're just talking about graphics cards that all do and play the same things, either faster or slower.

Yeah, it gets annoying when super-fanboys who literally can't stop posting overhype things to an obnoxious degree. There's plenty to be cautiously optimistic about, but Jesus Christ...
 

Lampa

Member
Feb 13, 2018
3,586
Interested to see how these shape up in the end. I don't think AMD will be able to compete with the top-of-the-line Nvidia cards, especially not in the RT/ML department, where Nvidia has a clear advantage over everyone, but maybe they could come close in raw performance. They also need to fix the drivers and all that; AMD cards are pretty alright now, but owning one is still a headache, and I say that as someone who owns an RX 5700 XT.
 

Bucéfalo

Banned
May 29, 2020
1,566
Unless they put something on the table that compares to DLSS, RTX, tensor cores, or RTX Voice (the best noise-cancelling feature ever created), they don't stand a chance.
 

NaDannMaGoGo

Member
Oct 25, 2017
5,967
The spec doesn't actually specify. Even if we assume that's power for the GPU alone, it's still a sub-300W card, and that still gives them more than a 50% perf/watt improvement, still beating the claim AMD actually made. There are Navi 10 entries in the same files showing 180W power draw, but they also only show a 1400 MHz clock speed rather than their actual release clocks, not even the base clock, so it's a bit weird.

Yeah, and you just state 240W like it's a fact, as you do with seemingly everything. Assume the best about everything and, what do you know, suddenly the cards look phenomenal!
 

TheNerdyOne

Member
Oct 28, 2017
521
Yeah, and you just state 240W like it's a fact, as you do with seemingly everything. Assume the best about everything and, what do you know, suddenly the cards look phenomenal!

I think the Navi 10 entry in there is for a Radeon Pro product, since these files came from Macs, which would explain the lower clock speed too; the W5700X has a lower clock speed than the gaming variant of that GPU. That would also mean the Navi 21 entry is a Pro variant, and it would explain recent Twitter posts from the usual suspects saying the desktop card is aiming for 2.3 GHz+ rather than the 2.2 GHz in these files. In any event, even if we take the worst case rather than the best case, they're still coming in ahead of AMD's claims from early in the year, and they're still delivering raster performance that's competitive with or faster than the 3080.
 

NaDannMaGoGo

Member
Oct 25, 2017
5,967
I think the Navi 10 entry in there is for a Radeon Pro product, since these files came from Macs, which would explain the lower clock speed too; the W5700X has a lower clock speed than the gaming variant of that GPU. That would also mean the Navi 21 entry is a Pro variant, and it would explain recent Twitter posts from the usual suspects saying the desktop card is aiming for 2.3 GHz+ rather than the 2.2 GHz in these files. In any event, even if we take the worst case rather than the best case, they're still coming in ahead of AMD's claims from early in the year, and they're still delivering raster performance that's competitive with or faster than the 3080.

I don't doubt AMD's comments on efficiency increases. It does all sound very promising.

But 240w or 300w, that happens to be a 60w / 25% increase. That's significant. That's me going from "holy shit at 240w for the card with 80 CUs @ 2.2 GHz?!? amazing!" to "that's pretty good".

When you do that to all the rumors and, in addition, downplay DLSS 2.0, which is delivering amazing benefits in several titles right now, with quite a few more announced, including the certain-to-be-giant Cyberpunk 2077, then it's hard to take you seriously.

Like, speculate all you want, it's enjoyable for many of us. But be honest and say "possibly 240w, though it might be 300" rather than omitting the latter possibility right from the get-go as if it's 240w confirmed for the entire card.
 

Bluelote

Member
Oct 27, 2017
2,024
The clocks sound high until you realize even the PS5 targets 2.23 GHz, so they must have worked on hitting some pretty high clocks. Also, yes, these cards could be seriously fast: if you look at the 5700 XT, it didn't do badly and was a pretty small chip, so long as RDNA can scale well with more CUs...

Unless they put something on the table that compares to DLSS, RTX, tensor cores, or RTX Voice (the best noise-cancelling feature ever created), they don't stand a chance.

Not really. If they have the price and the raw performance, they're fine; Nvidia still priced their cards fairly high, so there's plenty of room for that. The features you mentioned are nice, but DLSS is not used by every game, since it requires the game to adopt it. RT implementation is an interesting point, but we can at least expect RDNA2 to match the consoles' implementation, and that should be the main target for many games. Noise cancelling is not a gaming feature.
 

Bucéfalo

Banned
May 29, 2020
1,566
The clocks sound high until you realize even the PS5 targets 2.23 GHz, so they must have worked on hitting some pretty high clocks. Also, yes, these cards could be seriously fast: if you look at the 5700 XT, it didn't do badly and was a pretty small chip, so long as RDNA can scale well with more CUs...

Not really. If they have the price and the raw performance, they're fine; Nvidia still priced their cards fairly high, so there's plenty of room for that. The features you mentioned are nice, but DLSS is not used by every game, since it requires the game to adopt it. RT implementation is an interesting point, but we can at least expect RDNA2 to match the consoles' implementation, and that should be the main target for many games. Noise cancelling is not a gaming feature.
While what you say about DLSS is true, it is also true that every relevant AAA game is going to implement it; you can count on it for almost every important release (same for RTX):

50 Games with RTX and DLSS - IGN (www.ign.com): games with enhanced ray tracing and increased frame rates are on the rise; see the full list of RTX-enabled games.

In the case of games that are not AAA, it's pretty much irrelevant whether they implement it or not, since they are not demanding enough and run fine without it.

Regarding noise cancelling, while it's not a gaming feature per se, it's a great bonus to take into account. RTX Voice is almost a blessing when playing and chatting on Discord at the same time.

I do not expect great changes in market share with the arrival of new AMD GPUs.

By late August, Nvidia had almost 80% of the market, and that's probably not going to change drastically.
 

DrDeckard

Banned
Oct 25, 2017
8,109
UK
I own a 3080 and I desperately want big Navi to be better than it. Nvidia needs some real competition and competition is good for all of us!
 

Buggy Loop

Member
Oct 27, 2017
1,232
The clocks sound high until you realize even the PS5 targets 2.23 GHz, so they must have worked on hitting some pretty high clocks. Also, yes, these cards could be seriously fast: if you look at the 5700 XT, it didn't do badly and was a pretty small chip, so long as RDNA can scale well with more CUs...

Not really. If they have the price and the raw performance, they're fine; Nvidia still priced their cards fairly high, so there's plenty of room for that. The features you mentioned are nice, but DLSS is not used by every game, since it requires the game to adopt it. RT implementation is an interesting point, but we can at least expect RDNA2 to match the consoles' implementation, and that should be the main target for many games. Noise cancelling is not a gaming feature.

Unless they put something on the table that compares to DLSS, RTX, tensor cores, or RTX Voice (the best noise-cancelling feature ever created), they don't stand a chance.

Well, that's the thing nobody in the AMD camp will be able to spin:

RDNA2 has no dedicated ML cores.

No tensor-core equivalent.

All ML is done in INT8/INT4 on the shaders, sacrificing shader performance for any ML work.

So I would put up a big warning for anyone considering AMD purely for rasterization performance, especially since there will be no DirectML games or benchmarks when the cards hit the stores this year: any DirectML implementation in games in the coming generation, whether for upscaling, automatic AI lip-sync, on-the-fly texture upscaling, AI-approximated physics simulation (much, much faster than traditional physics calculations), AI face reconstruction, or things we haven't even thought of yet (ML will surely be used by developers in some interesting ways), will without a doubt run much better on RTX. That's not even taking Ampere's new sparse-matrix tensor cores into consideration. It's like bringing a knife to a gun fight.
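To illustrate the shader-sharing point in the abstract (a toy model only; the numbers are made up and real behavior depends on scheduling, async compute, and the workload):

```python
# Toy model: ML that runs on the same shader ALUs as rendering eats into the frame
# budget, whereas an idealized dedicated unit would run alongside it.
def fps_if_ml_on_shaders(base_fps, ml_ms_per_frame):
    frame_ms = 1000.0 / base_fps
    return 1000.0 / (frame_ms + ml_ms_per_frame)     # ML time added on top of shader time

def fps_if_ml_on_dedicated_units(base_fps, ml_ms_per_frame):
    return base_fps                                   # idealized fully concurrent case

print(fps_if_ml_on_shaders(60, 3.0))                  # ~50.8 fps if 3 ms of ML hits the shaders
print(fps_if_ml_on_dedicated_units(60, 3.0))          # 60 fps in the idealized case
```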
 

JahIthBer

Member
Jan 27, 2018
10,382
The 40 CU GPU had better be no more than $399, though I expect AMD might try to match the 3070 at $499.