> Bloody hell I only just bought a 2070 Super :( Oh well, time to start saving again.

With a 2070S, you should be fine at least until Hopper.
> Because the clockspeeds on thermally constrained consoles like the PS5 are insane for RDNA 2.

Are they? RDNA1, in the form of Navi 10, hits 2.1 GHz pretty consistently.
In the overall standings, where less power is better, it's pretty easy to see how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. Even now, with 7nm Navi GPUs going up against 12nm GPUs, AMD is only roughly equal to Nvidia. How things will change with upcoming Nvidia Ampere and AMD Big Navi launches is something we're definitely looking forward to seeing, what with AMD's claims of 50% improvements in performance per watt with RDNA 2.
> BTW, Nvidia just announced a PCIe version of the A100: 19.5 TFLOPs at 250W! Maybe this is the GPU in the leak?

The A100 lacks video outputs, and I'm not even sure there are Windows drivers for it right now.
> 30% gains for Turing prices? So much for the hype. Get the fuck out of here, Nvidia.

I mean, this probably runs RT much faster and has a massive number of other improvements; 3DMark scores don't mean everything. Plus this is probably the stock 3080 card, which will only be improved by aftermarket clocks.
> A100 lacks video outputs and I'm not even sure if there's Windows drivers for it right now?

Well, there are ways around the first, but you are probably right about the second.
So, let's assume the best-case scenario (that 50% power-efficiency jump): now AMD is finally on par... with a 2-year-old Nvidia architecture, without counting the tensor and ray-tracing cores of the latter. Congrats?
Let's assume hypothetically that the 50% performance-per-watt increase figure is true for RDNA2; it would give Big Navi (with 80 compute units and a 2 GHz base clock) around 30-40% more performance than the 2080 Ti.
Obviously we are not sure about anything with Big Navi; even the 80-compute-unit figure is a rumour.
Anyway, like I said before, I'm pretty sure Nvidia has something great in the works with Ampere, and I think they'll keep their performance crown with this new generation of cards.
But if AMD can offer more competitive cards, Nvidia will have to align their pricing to AMD's, and that's what we want as consumers.
> If you don't need top of the line all the time then you can ride until Hopper (4k series).

It was my first-ever gaming PC build, so I think I'll be content.
Your 2080Ti will still be a faster card than what 99% of PC gamers will have throughout the whole of 2021.
And now shrink the 2080 Ti to 7nm, which is the least we can expect from Ampere.
Having AMD compete with Nvidia again would only be good for us consumers, but that is far from a given, sadly.
> Bloody hell I only just bought a 2070 Super :( Oh well, time to start saving again.

Me too!
Should a 2070 Super keep me playing at 1080p/60FPS for a good couple of years at least?
> Let's assume hypothetically that 50% performance per watt increase figure was true for RDNA2, it would give Big Navi (with 80 compute units and 2ghz base clock) around 30/40 % more performance than the 2080ti.

How does that work out? The 5700 XT is only about as strong as a 2070 Super, which is a good 20-30% weaker than a 2080 Ti. I don't see how a 50% boost on top of that gets to 30-40% better than one, unless you're expecting it to draw significantly more power?
2070S has been really good so far, at the moment I'm playing at 1080p so it just eats up most games easily at max settings. I'm getting RDR2 soon to see how it performs.
But yeah, I hope the 3080 Ti is a big jump.
> You're being weirdly hostile about this. We'll all find out when the cards appear, but the consensus seems to be this is indeed the 3080. But it really doesn't matter...

I guess sarcasm isn't allowed anymore :(
> I'd bet a dollar on this being the Ti/3090 model if the result is real and the recent rumours are somewhat accurate. Going to 7nm by itself does nothing for performance; what it does is allow you to put more stuff into the same amount of space (or alternatively go for decreased power consumption). The rumour is that the 3090 has around 5248 CUDA cores and was benchmarked at ~1900MHz; the 2080 Ti FE has 4352 cores and typically benchmarks at around 1800MHz. ~20% more cores, a bit more MHz, small IPC improvements, and pre-release drivers: around 30-35% seems very much in the ballpark. It's not like the Turing architecture is suddenly poor and they can magically squeeze 30% extra just from architecture improvements at this point. Maybe AMD could do that, because they were well behind in terms of high-end performance.
> Everyone is expecting big improvements with ray tracing, so they also need room for that on the board/die, but the die size of the 2080 Ti was already massive; I doubt they are going for another die of that size, and definitely not bigger. And while the node change helps with power consumption, there's now so much new stuff being added that whatever breathing room they gained by going 7nm is suddenly all gone. They can't really go for increased power consumption vs the 2080 Ti either; that thing was already at the limit of what's reasonable for consumer cards.
> So add some extra MHz on core/memory (which isn't nearly as easy as it used to be with node changes, these things are so complex) and small driver improvements, and that 30-35% is suddenly in the 40-45% range, which to me seems pretty good for 2020 all things considered. I am, however, expecting much bigger improvements with ray tracing.

Lol, a 3090 for just a 30% increase over a 2-year-old GPU?
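That cores-times-clock estimate can be sketched as a quick back-of-envelope. Note that the 5248-core and ~1900MHz figures are thread rumours, not confirmed specs, and linear scaling with core count is an optimistic assumption:

```python
# Back-of-envelope scaling from the rumoured specs: perf ~ cores * clock.
# All figures are rumours/approximations from the thread, not confirmed specs.
cores_3090, clock_3090 = 5248, 1900        # rumoured 3090 cores, benchmark MHz
cores_2080ti, clock_2080ti = 4352, 1800    # 2080 Ti FE, typical benchmark MHz

raw_ratio = (cores_3090 * clock_3090) / (cores_2080ti * clock_2080ti)
print(f"raw throughput ratio: {raw_ratio:.2f}x")  # ~1.27x

# Add a small assumed IPC uplift and you land in the quoted 30-35% range.
for ipc_gain in (1.03, 1.05, 1.08):
    print(f"with +{ipc_gain - 1:.0%} IPC: {raw_ratio * ipc_gain:.2f}x")
```

Run as-is, the raw ratio comes out around 1.27x, so even a modest assumed IPC gain puts the total in the 30-35% neighbourhood the comment describes.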
> I just don't see how this is the 3080. Unless Nvidia continues to make these cards on huge dies (like they did with the 2000 series, which also made them more expensive), I don't see the 3080 Ti being more than a 30% improvement.
> A 30% increase is pretty typical for Nvidia. Only a few times have they gone beyond that (the 1080 Ti being the most recent; that was 60%).
> Yes, we have a die shrink, but if Nvidia is going back to its typical 550mm2 size, the shrink will be negligible. Adding more transistors and shaders does not scale linearly, especially if you aren't willing or able to increase clock speeds.
> The real gains from this card will be improved ray tracing. I think we will see a huge upgrade in that area, and that is what Nvidia will be pushing come reveal.
> While I hope this is the 3080, my guess is this is the 3080 Ti or 3090 or whatever you want to call it.

The 980 Ti was generally around 50% faster than the 780 Ti too, if not more in some instances, wasn't it? The 970 in a lot of instances was pretty much neck and neck with it. These cards are supposedly going to be capable of going well over 2GHz. Engineering samples are clocked lower and obviously aren't using finished drivers.
> How does that work out? The 5700xt is only about as strong as a 2070 super, which is like a good 20-30% weaker than a 2080ti. I don't see how a 50% boost on top of that gets to 30-40% better than one? Unless you're expecting it to draw significantly more power?

The 5700 XT is a ~$400 card with 40 CUs, and its successor will probably be another ~$400 card. Why wouldn't a 50% performance-per-watt increase reach 2080 Ti levels at probably lower power usage, if the 5700 XT was ~30% weaker?
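As a rough sanity check of that scaling argument, here is one way the numbers could work out. Every input is a thread rumour or an assumption: 80 CUs, ~85% multi-CU scaling efficiency, a ~225W 5700 XT, and a 2080 Ti that is ~30% faster than a 5700 XT:

```python
# Hypothetical back-of-envelope: doubled CUs plus a claimed +50% perf/watt.
# All inputs are rumours or assumptions, not confirmed figures.
xt_power = 225.0     # 5700 XT board power, watts (approx.)
xt_perf = 1.0        # normalise 5700 XT performance to 1.0
perf_2080ti = 1.3    # thread's claim: 2080 Ti ~30% faster than a 5700 XT

cu_ratio = 80 / 40   # rumoured Big Navi CU count vs the 5700 XT
scaling = 0.85       # CUs rarely scale perfectly; assumed efficiency factor

big_navi_perf = xt_perf * cu_ratio * scaling       # 1.7x a 5700 XT
perf_per_watt = (xt_perf / xt_power) * 1.5         # claimed +50% perf/watt
big_navi_power = big_navi_perf / perf_per_watt     # implied board power

print(f"vs 2080 Ti: {big_navi_perf / perf_2080ti:.2f}x")  # ~1.31x
print(f"implied power: {big_navi_power:.0f} W")           # ~255 W
```

Under those assumptions the doubled-CU card lands roughly 30% ahead of a 2080 Ti at around 255W, which is why the 50% perf/watt figure and the 80-CU rumour together imply 2080 Ti-beating performance without a huge power draw.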
> I guess sarcasm isn't allowed anymore :(
> Also, I would be the first one jumping up and down if it's a 3080 that outperforms a 2080 Ti by that much. I am getting the 3080 Ti or the 3090 Ti/Super regardless of what card is in the benchmarks. It just seems people's expectations are out of whack; expecting the 3070 to outperform the 2080 Ti and cost half as much is unrealistic, and there's no harm in calling out its ridiculousness.

The consensus is within ~5% of a 2080 Ti, which is roughly in line with where the x70 card has landed prior to the 20-series, but with the benefit of significantly improved ray tracing, which is the area where it'd probably beat it. The 970 had an MSRP (and power draw) of less than half the 780 Ti's, and the 1070 was about $100 over being half the price of a 980 Ti. So why is it unrealistic for a 3070 to be within around 5% of a 2080 Ti, while more capable with ray tracing, for less than half the price, given there's actually going to be some competition this time around, unlike 2018?
> The 980 Ti was generally around 50% faster than the 780 Ti too if not more in some instances wasn't it? The 970 in a lot of instances was pretty much neck and neck with it. These cards are supposedly going to be capable of going well over 2GHz. Engineering samples are clocked lower and obviously aren't using finished drivers.

But wasn't the 980 Ti the first real introduction to Maxwell?
I think Nvidia has pretty much maxed out the Maxwell-esque transistor. Unless Ampere has improved its transistors, I just don't see the similarity between the 900 generation and the 3000.
> But wasn't the 980ti the first real introduction to maxwell?
> I think nvidia has pretty much maxed out the maxwell-esque transistor. Unless ampere has improved its transistor I just don't see the similarity in the 900 gen vs 3000.

IIRC the 980 Ti came out about nine months after the 980. I don't know what similarities Maxwell's architecture has with Turing; Pascal was the one that was effectively a die shrink of Maxwell. The relatively minor jump from Pascal to Turing is attributed to die real estate being taken up by RT and Tensor cores on a relatively minor die shrink (12nm was an enhanced version of 16nm). Going to 7nm should be a much larger jump.
How much are these cards going to cost ffs?
I was watching the latest news rundown from GamersNexus earlier and they mentioned the cooling in the leaked pics of the Founders edition 3000 series cards apparently costs Nvidia around $150 each. For reference, he mentioned the more traditional cooling you'll usually find on more high-end cards (Strix, FTW, etc.) costs manufacturers around $50.
I mean, even the 2080 Ti was 30+% faster than the 1080 Ti, and that was basically on the same process while introducing RT and Tensor cores.
I wish I was in a "bind" that allowed me to purchase a second TITAN.
I'm not bringing up AMD to diminish what Nvidia has achieved until now. They are clearly in a position of leadership when it comes to GPUs.
Do some of you really think AMD is going to even attempt to compete in the high-end market again? They've left that to Nvidia for so long that I don't see the point. They'll continue to offer cheaper alternatives for the mid and low end, and that's frankly fine by me.