> Jesus, this AMD card sounds like it's a monster if it's twice as fast as a 5700 XT

At best, it'll be 50% more performance. The 2080 Ti is about that compared to the 2060 Super (which is half the 2080 Ti).
> I meant RDNA2 in general. Let's see if these 80 CUs can destroy the 3080 Ti with ease; that's the more apt comparison here.

I see. If we take into account RDNA2's 50% perf-per-watt improvement over RDNA1, then theoretically this card could be anywhere from 30% to 50% more powerful than the 2080 Ti. So yeah, if the math is right, this can trade blows with high-end Ampere.
If they deliver that with more power than a 3080, and for less money, I don't care about DLSS.
> I'd say three bananas taped together. End to end, so it's really long.

Ooh, that sounds really powerful.
Close to 19.5 TFLOPs, I think (assuming the same clocks as the 5700 XT). The 2080 Ti is 14 TFLOPs. EDIT: 17 TFLOPs as per dampflokfreund.
> 6 games support DLSS. We have no idea if most games will use the tech.

But DLSS is essentially a free 50% performance improvement for the 3080. Maybe not across all games, but I'm willing to bet it will be true of a lot of demanding AAA games.
> Close to 19.5 TFLOPs, I think (assuming the same clocks as the 5700 XT). The 2080 Ti is 14 TFLOPs.

The 2080 Ti is around 17 TFLOPs.
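The TFLOPs figures being thrown around here all come from the same simple formula: 2 FLOPs per shader per clock (one fused multiply-add) times shader count times clock speed. A quick sketch to sanity-check the numbers; the clock speeds are the reference boost clocks I'm assuming, not guaranteed real-world clocks:

```python
def tflops(shaders: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: 2 FLOPs (one FMA) per shader per clock."""
    return 2 * shaders * clock_ghz / 1000  # GFLOPs -> TFLOPs

# 5700 XT: 40 CUs x 64 shaders = 2560, ~1.905 GHz boost
print(round(tflops(2560, 1.905), 2))  # ~9.75 TF, matches AMD's official figure

# Hypothetical 80 CU Big Navi at the same clocks
print(round(tflops(5120, 1.905), 1))  # ~19.5 TF

# 2080 Ti: 4352 cores at the 1.545 GHz reference boost...
print(round(tflops(4352, 1.545), 1))  # ~13.4 TF ("14 TFLOPs" figure)
# ...versus an assumed ~1.95 GHz it can sustain in practice:
print(round(tflops(4352, 1.95), 1))   # ~17.0 TF ("17 TFLOPs" figure)
```

This is also why both "14 TFLOPs" and "17 TFLOPs" get quoted for the 2080 Ti: it depends on whether you plug in the paper boost clock or the clocks the card actually runs at.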
> 6 games support DLSS. We have no idea if most games will use the tech.

As long as Nvidia keeps sponsoring games, those will use DLSS, and they'll definitely keep opening the wallet for the big games.
And if the whole market adopts it to prolong the life of your GPU, I doubt Nvidia won't start charging a subscription for the DLSS service...
A Series X on top of a Lockhart.
> Holy shit... Series X's GPU out-specced inside the year lol. I had heard Moorslawisdead talk about a 24 TFLOP AMD GPU a couple of months ago; is that BS, or a more powerful version of the one you're talking about being 17 TFLOPs?

The 2080 Ti is 17 TFLOPs. As for a 24 TFLOP AMD card, maybe that was a server card? Arcturus will have 8192 cores (128 CUs) max, but that competes with the A100 and isn't meant for gaming.
> If Nvidia can finally get some real competition, I will be happy.

Most would be, aside from Nvidia shills, but AMD needs to get their act together on temperatures and on delivering good updates. Even to this day the 5700 XT has issues that date back to when it was released.
Super Saiyan Blue Gamecube
> Holy shit... Series X's GPU out-specced inside the year lol. I had heard Moorslawisdead talk about a 24 TFLOP AMD GPU a couple of months ago; is that BS, or a more powerful version of the one you're talking about being 17 TFLOPs?

If you clocked this one at PS5 clocks you'd get ~23 TF.
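The ~23 TF figure checks out with the same peak-FLOPs formula, assuming 80 CUs at the PS5's 2.23 GHz clock cap:

```python
def tflops(cus: int, clock_ghz: float) -> float:
    # 64 shaders per CU, 2 FLOPs (one FMA) per shader per clock
    return 2 * cus * 64 * clock_ghz / 1000  # GFLOPs -> TFLOPs

# Hypothetical 80 CU Big Navi at the PS5's 2.23 GHz
print(round(tflops(80, 2.23), 1))  # ~22.8 TF, i.e. the "~23 TF" above
```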
> 2x 5700 XT, plus some efficiency gains for RDNA 2, plus some clock increases = 2.5x 5700 XT performance, maybe? Yikes, Nvidia has their hands full.

Nah, 2x the core count. These don't scale linearly.
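The 2.5x guess only works if you assume near-linear CU scaling on top of clock and per-clock gains, and wide GPUs rarely scale that cleanly. A rough multiplicative sketch with made-up factors; the 85% scaling efficiency, 10% clock bump, and 25% perf-per-clock gain are all illustrative assumptions, not leaks:

```python
def est_speedup(core_ratio: float, scaling_eff: float,
                clock_ratio: float, ipc_ratio: float) -> float:
    """Naive multiplicative model of generational speedup vs the 5700 XT."""
    return core_ratio * scaling_eff * clock_ratio * ipc_ratio

# 2x CUs at 85% scaling, +10% clocks, +25% perf per clock (all hypothetical)
print(round(est_speedup(2.0, 0.85, 1.10, 1.25), 2))  # ~2.34x
```

Even with generous assumptions, the scaling-efficiency term is what keeps "2x the cores" from meaning "2x the performance."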
of course it's all speculation at this point.
Too much memory for games, not enough support from 3D applications that would actually be able to utilize it. AMD GPUs are always in a weird spot.
> If you clocked this one at PS5 clocks you'd get ~23 TF.

Probably a little too much, but who knows.
No one cares about 1.0; it's defunct. With DLSS, everyone means 2.0.
As for games supporting DLSS 2.0: how many games support AMD's current upscaling implementation?
This is about the future. DLSS means an essentially free 50% improvement in frame rates. That's going to be hard to overcome in raw hardware performance.
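Where that "free ~50%" intuition comes from: DLSS renders internally at a lower resolution and upscales, so shading work drops with pixel count. A back-of-the-envelope sketch, assuming frame time scales roughly with pixels shaded (it doesn't exactly, which is why real gains land below the raw pixel ratio); the resolutions are the commonly cited 4K-output, 1440p-internal pairing, not official figures:

```python
def pixel_ratio(out_w: int, out_h: int, in_w: int, in_h: int) -> float:
    """How many times fewer pixels are shaded at the internal resolution."""
    return (out_w * out_h) / (in_w * in_h)

# 4K output, 1440p internal render
r = pixel_ratio(3840, 2160, 2560, 1440)
print(r)  # 2.25x fewer pixels shaded
# If frame time were purely pixel-bound, fps could rise up to ~2.25x;
# non-resolution-bound work and the upscale pass itself are why
# ~1.5x is the more realistic claim.
```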
It supposedly has twice the CUs
Doesn't mean it will scale linearly
AMD's lack of research time and money
> Too much memory for games, not enough support from 3D applications that would actually be able to utilize it. AMD GPUs are always in a weird spot.

From a professional standpoint or a hobbyist standpoint? 'Cause there's a separate line of cards for that.
Yea, it's possible and likely, but we'll have to wait and see. This could cause power draw to go crazy.
Yeah, I think this was his calculation. If you can clock a GPU at 2 GHz in a console-sized box (bigger than usual, but still), then I don't see why you can't go beyond that in a large, powered and cooled PC tower.
How is that on NV though?
Edit: for clarification, and because you're not the poster who complained about DLSS being proprietary: AMD not having an ML implementation of their own isn't in any way, shape, or form NV's fault.
Seeing DLSS as some sort of exclusive-feature BS, when NV developed it and actually added hardware to speed it up, makes no sense.
They could just use DirectML for their ML implementation, no proprietary issue anywhere.
Technically I don't think Nvidia added tensor cores to speed up DLSS - more like the tensor cores were added for professional users, and they came up with a way to make them useful in games.
I have no idea what most of these numbers mean. But as a cookie clicker veteran, I like these big numbers.
What's everyone's thoughts on price? I want to finish building a new PC, but I want to wait until the new AMD Big Navi cards drop so I can at least match next-gen performance at 4K. I know AMD is all about the best performance-per-dollar value and they are competing with Nvidia, but do you think we are looking at the $600-$700 range? Or the $500-or-less range?
Oh please, don't start these marketing posts for a technology that Nvidia is using exclusively... they don't need your help (their market cap overtook Intel's) and the market needs competition.
DLSS 2.0 is great, but it's not without flaws. The main problem is that it's proprietary; very few games use it or will use it.
Hopefully they can make proper drivers for it and create legit competition, because AMD drivers in the last decade have been garbage.
Geez, relax, people. Next-gen consoles will have 16-20 GB of RAM in total, of which some 2-4 GB will be used by the OS and some 2-4 GB by game logic, which leaves 8-16 GB of VRAM requirements, and 12 is exactly in the middle.
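That budget math, spelled out. The OS and game-logic reservations are the rough guesses from the comment above, not published figures:

```python
def vram_budget(total_gb: int, os_gb: int, logic_gb: int) -> int:
    """What's left of a console's unified RAM for graphics data."""
    return total_gb - os_gb - logic_gb

lo = vram_budget(16, 4, 4)    # pessimistic case: 8 GB left for VRAM
hi = vram_budget(20, 2, 2)    # optimistic case: 16 GB left for VRAM
print(lo, hi, (lo + hi) / 2)  # 8 16 12.0 -> 12 GB sits right in the middle
```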
> When was the last time AMD leaks/predictions turned out to be true and not overblown? On the GPU side, maybe the 290?

Navi 21 should be a good product, the first one from AMD since 2013's Hawaii that's capable of competing with NV's high-end cards.