- it runs serially, so the CUs are stalled while it's running. This is OK because it's really fast at what it does, but it still costs a few ms per frame out of your budget
This obviously isn't a blocking issue, otherwise you wouldn't get any speed-ups with DLSS at all.
- the tensor cores also take up silicon area, so for any given die size you could argue you'd get more CUs on the same chip if you didn't have tensor cores.
Which would push your power consumption up (more active SIMDs = more power), leading to the chip clocking lower, which would likely cancel out the additional SIMDs and net you more or less the same performance, but without the new ML h/w, and without DLSS.
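To put rough numbers on that tradeoff, here's a minimal Python sketch. It assumes the usual dynamic power model P ~ units * f * V^2; how complete the washout is depends on whether voltage can scale down with clock, and every figure here is illustrative, not measured.

```python
# Toy model of the "more SIMDs vs. lower clocks" tradeoff.
# Assumes dynamic power P ~ units * f * V^2. If voltage is already at its
# floor, P ~ units * f, so a fixed power budget forces f ~ 1/units and
# performance (units * f) stays flat. If voltage can scale down with clock
# (V ~ f, so P ~ units * f^3), part of the gain survives.
# All numbers are illustrative, not measured.

def relative_perf(extra_units: float, voltage_tracks_clock: bool) -> float:
    """Performance relative to baseline after adding SIMDs at a fixed power budget."""
    units = 1.0 + extra_units
    k = 3.0 if voltage_tracks_clock else 1.0  # exponent of f in the power model
    clock = units ** (-1.0 / k)               # keeps units * f^k constant
    return units * clock

print(relative_perf(0.2, voltage_tracks_clock=False))  # 1.0   -> full washout
print(relative_perf(0.2, voltage_tracks_clock=True))   # ~1.13 -> partial gain
```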
Does it increase latency when rendering frames? I don't have a DLSS card, but how's the performance when it's filling in the detail?
Any additional part of rendering a frame increases latency, as latency is the inverse of performance. So if you add DLSS on top of your regular frame rendering, you'll get slower performance and higher latency. But since DLSS runs on top of a lower-than-native-resolution render, the cost of DLSS itself is well below the time saved by rendering at the lower resolution. So the latency is obviously lower than without DLSS, as it would be with anything that significantly increases performance.
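Rough numbers to show the shape of the math (all made up, just a sketch; no particular GPU measured):

```python
# Back-of-the-envelope frame times for the point above. The DLSS pass adds
# a roughly fixed cost, but it sits on top of a much cheaper low-res render.
# All timings are made-up illustrative values, not measurements.

NATIVE_4K_MS = 16.0      # hypothetical native 4K frame time
LOWRES_RENDER_MS = 8.0   # same frame rendered at 1440p (far fewer pixels shaded)
DLSS_COST_MS = 1.8       # hypothetical fixed cost of the DLSS pass

dlss_frame_ms = LOWRES_RENDER_MS + DLSS_COST_MS
print(f"native 4K:    {NATIVE_4K_MS:.1f} ms ({1000 / NATIVE_4K_MS:.0f} fps)")
print(f"1440p + DLSS: {dlss_frame_ms:.1f} ms ({1000 / dlss_frame_ms:.0f} fps)")
# DLSS adds latency relative to the bare 1440p render, but the total is
# still well under the native frame time, so end-to-end latency drops.
```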
I think Nvidia already has a good idea of the competition's performance and will launch on their own schedule, and if AMD do end up beating them, well, Nvidia could just launch another card and call it a Super or whatever.
Launch a new card to replace an older one just a month after launching that older one? Not good business practice.
Isn't RDNA2 supposed to be on TSMC's 7nm+ (or at least an improved version of the 7nm process they used for RDNA1)?
Wouldn't there be a pretty sizeable difference in transistor densities if NV went with Samsung's 8nm?
RDNA2 is on N7 "enhanced", which is basically N7 with the tweaks it has received since it launched in 2018. N7+ is the EUV version, and so far it doesn't look like any GPU or CPU vendor is going to use it over N7.
Density is a product of design as much as of the production process, so it's hard to say how two vastly different designs will compare on two different production lines.
What does it matter anyway? Even if AMD has a density advantage, I doubt that N7 wafer pricing will let them take advantage of it by building a gaming chip as huge as GA100, for example; they'd need to sell it for thousands of dollars. And with the processes being different, with Samsung's 8nm being potentially cheaper per transistor than TSMC's N7, even the size difference won't matter much. Cost per transistor is what matters for margins, along with comparative design complexity at the same performance level.
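To illustrate that cost-per-transistor point with placeholder numbers (real wafer prices and achieved densities for N7 and Samsung 8nm aren't public, so every figure below is a made-up assumption):

```python
# Illustration of the cost-per-transistor point above: a denser process can
# still lose on cost if its wafers are priced high enough. Every number here
# is a made-up placeholder, not a real quote or achieved density.

def cost_per_btr(wafer_price_usd: float, mtr_per_mm2: float,
                 usable_mm2_per_wafer: float = 55_000) -> float:
    """USD per billion transistors, ignoring yield differences."""
    transistors_b = mtr_per_mm2 * usable_mm2_per_wafer / 1000.0
    return wafer_price_usd / transistors_b

print(f"TSMC N7 (hyp.):     ${cost_per_btr(9000, 65):.2f} / Btr")
print(f"Samsung 8nm (hyp.): ${cost_per_btr(5500, 45):.2f} / Btr")
# If the 8nm wafer is cheap enough, its lower density can still yield a
# lower cost per transistor -- which is what matters for margins.
```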
That being said, I do think that AMD has a chance of competing with top-end Ampere cards, for the first time since, I dunno, Tahiti vs GK104? Hawaii was way too power inefficient against GM204, and Fury never really managed to beat GM200. And top-end Pascals and Turings were just completely out of reach.