
Caz

Attempted to circumvent ban with alt account
Banned
Oct 25, 2017
13,055
Canada
The VRAM amounts are enticing, but those memory bus sizes do seem a tad on the small side. Hopefully AMD really puts the pedal to the metal with RDNA2. They really need to be competing with the RTX 3080 and not just the RTX 3070.
They need to be competing with the 3080 and the 3090. While the latter is more of a workstation card than a gaming GPU, especially if those recent leaks are a best-case scenario vs. the 3080, not having a Pro Vega II equivalent in Big Navi for OEMs to adopt will result in AMD ceding a large chunk of the market to NVIDIA.
 

Simuly

Alt-Account
Banned
Jul 8, 2019
1,281
That's not true at all. At the level of a single multiprocessor, Ampere's improvement in RT over Turing is considerably higher than the rest of Ampere's gains.

I'm talking about actual performance differences in benchmarked games, which is all that matters, comparing the 3080 vs. the 2080 Ti. This is based on one source though:

[video: youtu.be - GeForce RTX 3080 Ray Tracing & DLSS Performance, RTX ON vs OFF]

"The gap between RTX on and RTX off (between the 3080 and 2080 Ti) is just 10%." The improvement looks so small because in some of the games they tested there is no difference at all between a 3080 and 2080 Ti: both see the same percentage performance drop from turning RT on.
 

dgrdsv

Member
Oct 25, 2017
11,846
See my post earlier.

Makes no sense for AMD to compete against Microsoft for a directML model for upscaling.
Sigh.

A model (software) which is running on DML would "compete against MS" in the same way a game running on DX12 would compete with DX12.

I'm talking about actual performance differences in games benchmarked
This has zero meaning for RT h/w as these games aren't limited by that.

That video has all of its conclusions wrong.
 

Simuly

Alt-Account
Banned
Jul 8, 2019
1,281
Sigh.

A model (software) which is running on DML would "compete against MS" in the same way a game running on DX12 would compete with DX12.


This has zero meaning for RT h/w as these games aren't limited by that.

That video has all of its conclusions wrong.

The point is, in games tested, the difference in RT performance/difference in performance impact from RT between Turing and Ampere is very small.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
The point is, in games tested, the difference in RT performance/difference in performance impact from RT between Turing and Ampere is very small.
I do not think that is the case at all. Ampere handles RT better than Turing.

You merely have to look at the performance differential between the 2080 and 3080 in a fully rasterised game vs. a path traced one to see that.
 

Edgar

User requested ban
Banned
Oct 29, 2017
7,180
I feel like that has always been the case? Nvidia goes for faster GPUs with less VRAM, while AMD goes for slower ones with more VRAM that cost less?
 

Buggy Loop

Member
Oct 27, 2017
1,232
Sigh.

A model (software) which is running on DML would "compete against MS" in the same way a game running on DX12 would compete with DX12.

Welp, I'll grant it's possible AMD has a model... but why? The purpose of using DX12U from a developer's perspective is to call the instructions and stay hardware agnostic. You think they would like to train an AI model on AMD, then a model on Nvidia, and then Intel...? It makes much more sense for Microsoft to provide a model.

It would never be used. Game developers would also want DirectML to be hardware agnostic.
 

dgrdsv

Member
Oct 25, 2017
11,846
The point is, in games tested, the difference in RT performance/difference in performance impact from RT between Turing and Ampere is very small.
Performance impact comes from additional shading which is required to incorporate any RT calculations.
Shading is done as shading is usually done and has no relation to RT h/w.
When RT h/w is the limiting factor, Ampere is considerably faster than Turing compared to workloads where it's not.
So again, their conclusions in this video are all wrong.

Welp, I do say it's possible AMD has a model.. but why. The purpose of using Dx12u from a developer's perspective is to call the instructions and be hardware agnostic. You think they would like to have to train an AI on an AMD model and then a model on Nvidia and then Intel...? Makes much more sense that Microsoft gives a model.
It doesn't matter how you train the model for inferencing. You only need compatible and capable (in performance) h/w to run the latter, its vendor is irrelevant.

It would never be used. The game developers also would want the DirectML to be hardware agnostic
DML is h/w agnostic.

Again, don't confuse the API - which is DirectML - with the software which runs through it. DLSS is the software; NGX (which it currently runs on) and DML are APIs. To have something like DLSS, AMD must develop this software - or use someone else's software instead, which would run on DML, which they would support with enough features and performance to be compatible with it. MS could make such software, sure. Whether they will, and how well it would run on GPUs without dedicated ML h/w, is a different question.
 

Raydonn

One Winged Slayer
Member
Oct 25, 2017
919
I've heard that AIB partners can only launch with Navi 22 products this year, while AMD will exclusively launch the Navi 21 products.


That's the closest thing I've heard so far. Prices can fluctuate until they actually announce it, just like with Nvidia.
 
Nov 2, 2017
2,275
The point is, in games tested, the difference in RT performance/difference in performance impact from RT between Turing and Ampere is very small.
I do not think that is the case at all. Ampere handles RT better than Turing.

You merely have to look at the performance differential between the 2080 and 3080 in a fully rasterised game vs. a path traced one to see that.
Exactly, the only way to measure this is to check a game that only uses ray tracing.

We conclude our ray tracing performance analysis with one of the only fully path-traced games on the market: Quake 2 RTX. Path tracing is RT in its purest form, so it's no surprise to learn that it's more computationally expensive than supplementing traditional rasterised rendering with effects like ray traced shadows, reflections or global illumination. So while Quake 2 was released back in 1997, the path-traced version of the game is one of the hardest games on the market to run, even with dedicated RT hardware.

So how does the new card do? Well, the RTX 3080 manages just 65fps at 1440p, but the RTX 2080 Ti and 2080 fare even worse - with average frame-rates of 44 and 34fps, respectively. That means we're looking at a performance advantage of 47 per cent for the 3080 over the 2080 Ti, which shoots up to 94 per cent when we compare the 3080 against the 2080.
https://www.eurogamer.net/articles/digitalfoundry-2020-nvidia-geforce-rtx-3080-review?page=6

So you have a 47% gain in RT performance here. Both the 2080 Ti and the 3080 have the same number of RT cores and, I think, more or less the same effective clocks. Of course, if you're going to use a game where the GPU spends 80% of its time on rasterization and 20% on RT, then you're not going to see a lot of overall performance gain even from a 50% bump in RT performance.
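Those percentages follow straight from the frame-rate ratios. Note that the 47% and 94% figures only work out if the 2080 Ti is the 44fps result and the 2080 the 34fps one. A quick Python check with the rounded figures:

```python
def gain_pct(new_fps: float, old_fps: float) -> float:
    """Percentage performance advantage of new_fps over old_fps."""
    return (new_fps / old_fps - 1.0) * 100.0

# Quake 2 RTX at 1440p, rounded figures from the Eurogamer review
print(gain_pct(65, 44))  # 3080 vs 2080 Ti: ~47.7%
print(gain_pct(65, 34))  # 3080 vs 2080: ~91% (94% in the article, presumably from unrounded averages)
```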

Say a GPU spends 80 seconds of rendering time on the rasterization part of a frame and 20 seconds on the RT part. What happens when you double RT performance? You spend 10 seconds on it instead of 20. Overall, you spend 90 seconds rendering the frame instead of 100, or in other words you gain about 10% overall performance. For some reason a lot of people are calling that 10% the gain in RT performance, but that's just wrong. Weird that a big tech youtuber would make that mistake. Seems like a bad conclusion from HU.
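The frame-time arithmetic above can be written out directly. A minimal Python sketch, using the hypothetical 80/20 split and 2x RT speedup from this post (not measured data):

```python
def frame_time(raster_time: float, rt_time: float, rt_speedup: float) -> float:
    """Total frame time when only the RT portion of the frame is sped up."""
    return raster_time + rt_time / rt_speedup

old = frame_time(80, 20, 1)          # 100 time units
new = frame_time(80, 20, 2)          # 90 time units: only the RT part halves
overall_gain = (old / new - 1) * 100
print(overall_gain)                  # ~11%: a 2x RT speedup shows up as only ~10% overall
```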
 

dgrdsv

Member
Oct 25, 2017
11,846
Say you spend 80 seconds of a GPU's rendering time on a frame for the rasterization part & 20 seconds for the RT part. What happens when you double RT performance? Well, you spend 10 seconds instead of 20 seconds on it.
RT (BVH testing) runs in parallel to shading even on Turing, so it's even less than that.
You have two workloads running in parallel: one finishes in 10 seconds, the other in 20. Your performance is limited by the second one.
Now you have new h/w where the first workload finishes in 3 seconds and the second one in 10. Will you see a 3x gain or a 2x gain?
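Since the two workloads overlap, the frame time is set by whichever finishes last, not by their sum. A sketch of the numbers above, assuming perfect overlap:

```python
def frame_time_parallel(workload_a: float, workload_b: float) -> float:
    """With fully overlapped execution, the slower workload sets the frame time."""
    return max(workload_a, workload_b)

old = frame_time_parallel(10, 20)  # old h/w: 10s and 20s -> 20s per frame
new = frame_time_parallel(3, 10)   # new h/w: 3s and 10s -> 10s per frame
print(old / new)  # 2.0: observed gain tracks the new bottleneck, even though one workload got ~3.3x faster
```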
 

Caz

Attempted to circumvent ban with alt account
Banned
Oct 25, 2017
13,055
Canada

In terms of specifications, Navi 21 is Big Navi, which integrates up to 80 CUs (5120 stream processors), paired with a 256-bit wide memory bus, and is expected to be named the RX 6900 series.
 

Simuly

Alt-Account
Banned
Jul 8, 2019
1,281
I do not think that is the case at all. Ampere handles RT better than Turing.

You merely have to look at the performance differential between the 2080 and 3080 in a fully rasterised game vs. a path traced one to see that.

I would like to see some more tests then. 10% difference in HU tests of actual relevant games is small.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
I would like to see some more tests then. 10% difference in HU tests of actual relevant games is small.
Go look at the work Richard did for the RTX 3080 review and compare its percentage uptick in a purely path-traced game vs. a purely rasterised one. SM for SM, Ampere is quite a bit faster at RT workloads.

Fully rasterised/compute game:
[image: gearsyjkix.png]


Fully path traced/ray traced game:
[image: q2rtx3ojkc.png]
 

Simuly

Alt-Account
Banned
Jul 8, 2019
1,281
Go look at the work Richard did for the RTX 3080 review and compare its percentage uptick in a purely path-traced game vs. a purely rasterised one. SM for SM, Ampere is quite a bit faster at RT workloads.

Fully rasterised/compute game:
[image: gearsyjkix.png]


Fully path traced/ray traced game:
[image: q2rtx3ojkc.png]

Thanks interesting will watch the full vid.
 

Serpens007

Well, Tosca isn't for everyone
Moderator
Oct 31, 2017
8,127
Chile
The good thing for those of us that will stay in the midrange is that we at least know that AMD puts out good competition there. I'm quite confident that the waiting game will benefit me since competition may bring down the 3060 price, or put something better for the same price-range so I'll just wait. Excited anyway though
 

mordecaii83

Avenger
Oct 28, 2017
6,860
So if this card has a 256bit bus and 16GB GDDR6 RAM, that doesn't bode well for it being a 3080 competitor at all. It seems like something with as much power as the 3080 would be massively constrained by bandwidth that low.

So if these leaks are true, the 6900XT is either A: Not a 3080 competitor, or B: bandwidth starved compared to the 3080.
 

jett

Community Resettler
Member
Oct 25, 2017
44,653
What's this I hear about Navi supporting a maximum of 80 CUs? What's that about?
 

platocplx

2020 Member Elect
Member
Oct 30, 2017
36,072
Go look at the work Richard did for the RTX 3080 review and compare its percentage uptick in a purely path-traced game vs. a purely rasterised one. SM for SM, Ampere is quite a bit faster at RT workloads.

Fully rasterised/compute game:
[image: gearsyjkix.png]


Fully path traced/ray traced game:
[image: q2rtx3ojkc.png]
thank you!
 

HMD

Member
Oct 26, 2017
3,300
Hopefully this kills the insane demand on 3080s a bit... Please let me pre-order one.
 

Locuza

Member
Mar 6, 2018
380
I am still so freaking confused when it comes to Navi 21 specs.

It is 80 CUs and 96 ROPs?

Or 80 CUs and 64 ROPs?

Or not even 80 CUs?
According to the firmware data from AMD, N21 has 80 CUs, which would be 5120 "shader cores".
[image: EeRMFpPXgAAff0g]

https://twitter.com/_rogame/status/1289239501647171584

As for the ROPs, there are 4 Render Backends per Shader Engine, so either 64 or 128 ROPs.
It's likely that AMD changed the number of ROPs per RB and is going with 128 ROPs.

I drew N21 with 2048-bit HBM2(e), which fits the number of L2$ tiles (16 texture channel caches for 16 memory channels).
(On a side note, Arcturus doesn't make a lot of sense with 16.)
Otherwise it would be a 256-bit GDDR6 interface, where rumours from RedGamingTech have it that a "128MB Infinity Cache" would balance it out, which I'm not inclined to believe.
[image: Ehphb2LXgAQcIMe]
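The shader and ROP counts above are simple multiplication. A hedged Python sketch; the 4-shader-engine layout and the per-RB ROP figures are assumptions based on RDNA1 (Navi 10) and the rumoured RDNA2 change, not confirmed specs:

```python
shader_engines = 4        # assumed N21 layout
cus = 80                  # from the firmware data
shaders = cus * 64        # 64 stream processors per CU
print(shaders)            # 5120

rbs = shader_engines * 4  # 4 Render Backends per Shader Engine
rops_rdna1 = rbs * 4      # 4 ROPs per RB, as on Navi 10 -> 64
rops_doubled = rbs * 8    # 8 ROPs per RB if AMD doubled them -> 128
print(rops_rdna1, rops_doubled)
```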
 

SharpX68K

Member
Nov 10, 2017
10,514
Chicagoland
According to the firmware data from AMD, N21 has 80 CUs, which would be 5120 "shader cores".
[image: EeRMFpPXgAAff0g]

https://twitter.com/_rogame/status/1289239501647171584

As for the ROPs, there are 4 Render Backends per Shader Engine, so either 64 or 128 ROPs.
It's likely that AMD changed the number of ROPs per RB and is going with 128 ROPs.

I drew N21 with 2048-bit HBM2(e), which fits the number of L2$ tiles (16 texture channel caches for 16 memory channels).
(On a side note, Arcturus doesn't make a lot of sense with 16.)
Otherwise it would be a 256-bit GDDR6 interface, where rumours from RedGamingTech have it that a "128MB Infinity Cache" would balance it out, which I'm not inclined to believe.
[image: Ehphb2LXgAQcIMe]


Thanks!

It's possible that the difference in ROPs corresponds to the different Navi 21 models, i.e. the RX 6900 and 6900 XT.


I see, thank you also.
 

Tora

The Enlightened Wise Ones
Member
Jun 17, 2018
8,639


9th November - Media Reviews
11th November - Card Launch

Dates can change of course
 

Spoit

Member
Oct 28, 2017
3,976
The good thing for those of us that will stay in the midrange is that we at least know that AMD puts out good competition there. I'm quite confident that the waiting game will benefit me since competition may bring down the 3060 price, or put something better for the same price-range so I'll just wait. Excited anyway though
If the Navi 22 still has 40 CUs, how much of an improvement will it really have over the 5700xt?
 

shark97

Banned
Nov 7, 2017
5,327
If the Navi 22 still has 40 CUs, how much of an improvement will it really have over the 5700xt?


I don't believe they'd step from 80 CUs to 40 CUs with nothing in between; that would be ridiculous. But RDNA 2 could offer a boost clock at least up to the PS5's 2.2 GHz (vs. 1.9 for the 5700 XT), plus, if rumours are to be believed (salted), 50%+ IPC gains.
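Taking those rumours at face value, the compound uplift for a 40 CU part can be sketched. Both the 2.2 GHz clock and the 50% per-clock gain are rumoured figures from this thread, not confirmed specs:

```python
clock_5700xt = 1.9  # GHz, approximate 5700 XT boost clock
clock_rdna2 = 2.2   # GHz, rumoured (PS5-level)
ipc_gain = 1.5      # rumoured 50%+ per-clock improvement, heavily salted

uplift = (clock_rdna2 / clock_5700xt) * ipc_gain
print(uplift)  # ~1.74x over the 5700 XT at the same CU count, if the rumours hold
```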
 

Atolm

Member
Oct 25, 2017
5,826
What are we expecting in terms of performance for Navi 21? Between the 3070 and 80?
 

maximumzero

Member
Oct 25, 2017
22,906
New Orleans, LA


9th November - Media Reviews
11th November - Card Launch

Dates can change of course


Oh hey, November 11th is my birthday.

All these cards are probably gonna be priced higher than what I'm willing to spend though.

I see the RX 5600 XT coming up on Newegg as a deal tomorrow, if it hits $275ish I think I'll probably just nab that for now.
 

civet

Member
Jul 6, 2019
460
France
November 11th would be nice. I'd quite like to replace my 480 with a 6600. Just before Cyberpunk comes out would surely be good.
 

SolidSnakeUS

Member
Oct 25, 2017
9,594
Yeah, the 5700xt was a midrange part, and that had really rough availability

Didn't Nvidia say they started mass producing the 3080/3090 in August? If AMD has already started making the cards and they come out mid-ish November, they will probably have a much larger stock. Not only that, with AMD being on a different chip process, it should not be fighting for any resources with Nvidia.
 

dgrdsv

Member
Oct 25, 2017
11,846
Didn't Nvidia say they started mass producing the 3080/3090 in August? If AMD has already started making the cards and they come out mid-ish November, they will probably have a much larger stock. Not only that, with AMD being on a different chip process, it should not be fighting for any resources with Nvidia.
Yeah, it will fight for resources with Apple, other AMD products (CPUs) and the rest of the industry instead. NV is pretty much alone as a big customer of Samsung's 8nm process.
There are indications that Navi 21 will be in very scarce supply this year, one of them being an apparent lack of AIB cards until 2021.
Also, if AMD had started production already they would've definitely launched it against Ampere, in any quantities. Having even a dozen cards on the market is better if it's able to compete - people would at least know this and hold off on purchasing 3080s.
 

Yopis

Banned
Oct 25, 2017
1,767
East Coast
Yeah, it will fight for resources with Apple, other AMD products (CPUs) and the rest of the industry instead. NV is pretty much alone as a big customer of Samsung's 8nm process.
There are indications that Navi 21 will be in very scarce supply this year, one of them being an apparent lack of AIB cards until 2021.
Also, if AMD had started production already they would've definitely launched it against Ampere, in any quantities. Having even a dozen cards on the market is better if it's able to compete - people would at least know this and hold off on purchasing 3080s.


Thought new Qualcomm chips were with Samsung also.
 

Simuly

Alt-Account
Banned
Jul 8, 2019
1,281
Yeah, it will fight for resources with Apple, other AMD products (CPUs) and the rest of the industry instead. NV is pretty much alone as a big customer of Samsung's 8nm process.
There are indications that Navi 21 will be in very scarce supply this year, one of them being an apparent lack of AIB cards until 2021.
Also, if AMD had started production already they would've definitely launched it against Ampere, in any quantities. Having even a dozen cards on the market is better if it's able to compete - people would at least know this and hold off on purchasing 3080s.

Not sure, they do seem confident when it does eventually release:

 

kami_sama

Member
Oct 26, 2017
6,998
Not sure, they do seem confident when it does eventually release:


I don't think you should take the word of someone inside AMD lol
Also, while Nvidia's launch was a mess, it wasn't a paper launch. Just too much demand.
I do not think AMD's launch will be a paper one, but the same thing will happen.
 

Simuly

Alt-Account
Banned
Jul 8, 2019
1,281
I don't think you should take the word of someone inside AMD lol
Also, while Nvidia's launch was a mess, it wasn't a paper launch. Just too much demand.
I do not think AMD's launch will be a paper one, but the same thing will happen.

I do not think there has been much stock of Ampere; it depends on what constitutes a paper launch, really.