
brain_stew

Member
Oct 30, 2017
4,731
We actually know more about RDNA2's architecture than Ampere's following Microsoft's Hot Chips presentation. There was nothing in there about improved IPC, and other than the addition of RT hardware in the TMUs, the architecture was an exact replica of RDNA1.

If Series X was faster than a 2080Ti we'd know about it already. So I'm not buying those rumours.

I do believe the 80 CU part will be faster than a 3070 in rasterization, but I don't believe it will reach 3080 levels. Pricing is obviously going to be key as well: due to lower RT performance, poor brand prestige and, crucially, a significant feature deficit, AMD needs to price equivalent rasterization performance lower than Nvidia. If it's slower than a 3080, then that caps the maximum price at $500 in order for it to be competitive.

I still wouldn't personally buy a GPU with lower RT performance and no Tensor cores mind you but I understand why others would. If you can get ~25% more rasterization performance for the same price then there'll be a market for it.
 

Phonzo

Member
Oct 26, 2017
4,817


1/2

> RDNA2 is a step up in both Efficiency & IPC over RDNA1
> Anyone thinking 80CU Navi21 will be competing with 48SM GA104 is insane

Navi21 vs GA102
Navi22 vs GA104
Navi23 vs GA106

> There is no way Nvidia would've launched 3080 for 699$ if "Big Navi" was competing with 3070
2/2

> The rumored 3070 Ti 16GB is there for a reason
> The rumored 12GB GA102 is there for a reason
> The rumored 20GB 3080 is there for a reason

Navi22 should be closer to Xbox Series X GPU size/config. That would be a better match for RTX 3070 from a CU/SM count perspective





Navi21 & Navi22 are both in testing.

I'm on the train. You just gotta show it before the 17th. I'm impatient.
 

Rice Eater

Member
Oct 26, 2017
2,816
I'm planning on getting a new GPU, but I can't justify spending $500+ even if I have the money for it, so I'm waiting for the 3060 or whatever the AMD equivalent is. I'm really hoping the 3060 will basically be a 2080 for $350, and that AMD can come in and give us the same performance while undercutting Nvidia a little at $280-300. 🙏
 

Isee

Avenger
Oct 25, 2017
6,235
According to Igor from Igor's Lab, AIB partners do not even have a bill of materials for Big Navi and are nowhere close to being able to estimate production. The first launch, should it come in late October or early November, would be AMD only, with partners maybe following in December.
¯\_(ツ)_/¯

Not sure a price can be preemptively dropped if the product is still that early in development.
 

Deleted member 49611

Nov 14, 2018
5,052
i'd rather have 10GB GDDR6X than 12GB GDDR6.

anyway. unless these cards come out before November 19th i'm not really interested.
 

eathdemon

Banned
Oct 27, 2017
9,690
i'd rather have 10GB GDDR6X than 12GB GDDR6.

anyway. unless these cards come out before November 19th i'm not really interested.
I would also rather have all the extras Nvidia is including, like DLSS. Even if AMD manages to tie on hardware power, Nvidia's software stack gives them an edge AMD doesn't have.
 

Spoit

Member
Oct 28, 2017
3,989
According to Igor from Igor's Lab, AIB partners do not even have a bill of materials for Big Navi and are nowhere close to being able to estimate production. The first launch, should it come in late October or early November, would be AMD only, with partners maybe following in December.
¯\_(ツ)_/¯

Not sure a price can be preemptively dropped if the product is still that early in development.
So definitely a paper launch then?
 
Oct 29, 2017
13,513
For all their various announcements in recent years, AMD always has graphs comparing themselves to their competitors. Nvidia has such a hold on the market that people have grown accustomed to measuring performance using Nvidia's cards as the scale, so I wouldn't be surprised if AMD won't announce these cards until they have at least 3080-level performance to measure against.

Not that they couldn't without them, but if they have a card that trades blows with a 3070 or 3080 the most effective marketing is to show those bars on screen. Comparing against themselves just wouldn't be the same, and the 2080ti is old news now.
 

tusharngf

Member
Oct 29, 2017
2,288
Lordran
A 16GB model and a $50 cheaper price tag could attract many people. Imagine a 3080 competitor for $599 with 12GB-16GB of VRAM. DLSS requires per-title optimization and maybe 1 out of 10 titles supports it. AMD just needs to put out a cheaper, competitive product with more VRAM.
 

Isee

Avenger
Oct 25, 2017
6,235
So definitely a paper launch then?

Paper Launch means nothing available at release date.
So no, I do not think it will be a paper launch.

Now for availability? I do not think we'll see large quantities during launch month, but I also do not think we'll see many RTX 3000 cards initially.
 

Vex

Member
Oct 25, 2017
22,213
Every time I see the word "Navi" I think "NetNavi" from Mega Man Battle Network.
 

tusharngf

Member
Oct 29, 2017
2,288
Lordran
Paper Launch means nothing available at release date.
So no, I do not think it will be a paper launch.

Now for availability? I do not think we'll see large quantities during launch month, but I also do not think we'll see many RTX 3000 cards initially.
Founders Edition cards always face this issue. The same thing happened last time; the 2080 Ti was out of stock for so many days. It's better we wait for other models. In my experience, ZOTAC stock was always on track in my region; I've never seen their cards go out of stock that often.
 

GrrImAFridge

ONE THOUSAND DOLLARYDOOS
Member
Oct 25, 2017
9,675
Western Australia
A 16GB model and a $50 cheaper price tag could attract many people. Imagine a 3080 competitor for $599 with 12GB-16GB of VRAM. DLSS requires per-title optimization and maybe 1 out of 10 titles supports it. AMD just needs to put out a cheaper, competitive product with more VRAM.

DLSS no longer requires per-game training as of 2.0, which, coupled with GTX presumably being a thing of the past, could very well see adoption steadily increase going forward to the point it's ubiquitous among games that use TAA or a derivative thereof. But you're right that it's slim pickings in the here and now.
 

tusharngf

Member
Oct 29, 2017
2,288
Lordran
DLSS no longer requires per-game training as of 2.0, which, coupled with GTX presumably being a thing of the past, could very well see adoption increase going forward to the point it's ubiquitous among games that use TAA or a derivative thereof. But you're right that it's slim pickings in the here and now.


You're right, but DLSS has lots of noise and objects miss a lot of detail. DLSS 2.0 is a major stepping stone, but it needs more refining. If AMD adds a FidelityFX- and TAA-based solution to their GPUs, they can do well going forward. We need AMD to put pressure on Nvidia to reduce the price of their cards. Lots of people are going to wait for Navi news. Having extra VRAM is always good for future generations of games.
 

Wollan

Mostly Positive
Member
Oct 25, 2017
8,815
Norway but living in France
AMD could surprise by having a DLSS equivalent running on normal shader cores. You don't need tensor cores (their advantage is that they use less wattage for the same job, and for Nvidia it's important to use what's otherwise dead silicon for gaming). With their high shader-core count, AMD could dedicate just a few percent of it to run rather lightweight ML-trained models such as DLSS. That would fit with how they are doing RT as well, which doesn't use as much dedicated silicon as Nvidia's approach either.
 

mordecaii83

Avenger
Oct 28, 2017
6,862
A 16GB model and a $50 cheaper price tag could attract many people. Imagine a 3080 competitor for $599 with 12GB-16GB of VRAM. DLSS requires per-title optimization and maybe 1 out of 10 titles supports it. AMD just needs to put out a cheaper, competitive product with more VRAM.
At some point, the extra VRAM doesn't help especially if you're planning on upgrading to the next series in 2 years or if you game at less than 4K.

Also RT is the biggest thing for me, I can't see any way AMD will match Nvidia's RT performance and that's why I won't even consider their cards.
 

R.T Straker

Chicken Chaser
Member
Oct 25, 2017
4,715
At some point, the extra VRAM doesn't help especially if you're planning on upgrading to the next series in 2 years or if you game at less than 4K.

Yeah.

Look at the terrible Radeon VII and it's 16GB VRAM as an example.

By the time they will matter the GPU will become obsolete. ( In this case it's already obsolete).
 

Deleted member 22585

User requested account closure
Banned
Oct 28, 2017
4,519
EU
In raw performance, I think AMD can get close to Nvidias offerings. Maybe they even bring some form of RT and smart IQ features to the table. I just think that the support and maturity of those features won't be on Nvidias level. Price will be interesting.
 

Sean Mirrsen

Banned
May 9, 2018
1,159
AMD could surprise by having a DLSS equivalent running on normal shader cores. You don't need tensor cores (their advantage is that they use less wattage for the same job, and for Nvidia it's important to use what's otherwise dead silicon for gaming). With their high shader-core count, AMD could dedicate just a few percent of it to run rather lightweight ML-trained models such as DLSS. That would fit with how they are doing RT as well, which doesn't use as much dedicated silicon as Nvidia's approach either.
It's not just wattage, the advantage of the optimized DLSS/Tensor package is that Tensor cores pack far more performance into the same die size. Like, an RTX 3070 is listed as having 20 TFLOPS of shader processing power, but 163 TFLOPS of Tensor processing power. Sure, Tensor probably takes up like half the die, but that's still at least eight times the processing capacity. Making DLSS run on shader cores, even if the entire die (in my imaginary 50/50 shader/tensor die scenario) were converted to run shaders, would make it run at a quarter of the speed. Meaning four times less potential improvement.

Meaning, ultimately, that for AMD to pack a DLSS approximation into shader cores, would require a disproportionately powerful and expensive GPU on their part.
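To make that concrete, here is the back-of-envelope arithmetic from the post in code form. The figures (20 shader TFLOPS, 163 tensor TFLOPS) are the ones quoted above, and the 50/50 shader/tensor die split is the post's own hypothetical, not a real die breakdown:

```python
# Back-of-envelope math using the post's assumed numbers, not measured data.
shader_tflops = 20.0    # RTX 3070 FP32 shader throughput (as quoted above)
tensor_tflops = 163.0   # RTX 3070 tensor throughput (as quoted above)

# Hypothetical: half the die is tensor cores. If that half were instead
# filled with shader cores, total shader throughput would roughly double.
all_shader_tflops = 2 * shader_tflops   # 40 TFLOPS

ratio = tensor_tflops / all_shader_tflops
print(f"Tensor path is ~{ratio:.1f}x faster than an all-shader die")  # ~4.1x

# So, under these assumptions, a DLSS-style network run on shader cores would
# take roughly 4x longer per frame, eating into the frame-time savings that
# upscaling is supposed to provide.
```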
 

Isee

Avenger
Oct 25, 2017
6,235
It's better we wait for other models. In my experience, ZOTAC stock was always on track in my region; I've never seen their cards go out of stock that often.

That was my experience with the 2080 Ti as well; I was only able to get my hands on one 3 months after release. Prices were also high for the "good" partner cards; a 2080 Ti Trio X or Strix OC went for close to 1600€ for some time.
 
Oct 27, 2017
4,927
It's not just wattage, the advantage of the optimized DLSS/Tensor package is that Tensor cores pack far more performance into the same die size. Like, an RTX 3070 is listed as having 20 TFLOPS of shader processing power, but 163 TFLOPS of Tensor processing power. Sure, Tensor probably takes up like half the die, but that's still at least eight times the processing capacity. Making DLSS run on shader cores, even if the entire die (in my imaginary 50/50 shader/tensor die scenario) were converted to run shaders, would make it run at a quarter of the speed. Meaning four times less potential improvement.

Meaning, ultimately, that for AMD to pack a DLSS approximation into shader cores, would require a disproportionately powerful and expensive GPU on their part.
I'm still convinced that DLSS just having a pattern of pixels as the input for upscaling is too inefficient to be the future. It seems too much like a brute force solution and I feel like most games 10 years from now will use a more dynamic method that incorporates the game engine giving labels to each object on the screen (IE: red dragon #4 or the main characters pony tail). DLSS does have the advantage of being easy to implement but I think it'll go the way of SSAA, where better alternatives will come up.

I'm not an engineer at all though.
 

eonden

Member
Oct 25, 2017
17,087
I'm still convinced that DLSS just having a pattern of pixels as the input for upscaling is too inefficient to be the future. It seems too much like a brute force solution and I feel like most games 10 years from now will use a more dynamic method that incorporates the game engine giving labels to each object on the screen (IE: red dragon #4 or the main characters pony tail). DLSS does have the advantage of being easy to implement but I think it'll go the way of SSAA, where better alternatives will come up.

I'm not an engineer at all though.
I think you should look up how DLSS 2.0 works... because what you say will happen "in 10 years" is more or less what DLSS is currently doing. DLSS is basically a super-powered TAA, using motion vectors to estimate how the game will look in the future. Plus, as TAA is in basically all modern engines, it shouldn't "die".
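For anyone unfamiliar, this is roughly what "TAA with motion vectors" means in code. It's a toy sketch of motion-vector reprojection and history blending, not Nvidia's actual pipeline; the function names and the fixed blend factor are illustrative, and DLSS 2.0 replaces the fixed blend with a learned, per-pixel decision:

```python
import numpy as np

def reproject(history, motion_vectors):
    """Warp last frame's accumulated image to the current frame using
    per-pixel motion vectors (toy nearest-neighbour gather)."""
    h, w = history.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # motion_vectors[..., 0/1] say where each current pixel came from, in pixels
    src_x = np.clip(xs - motion_vectors[..., 0], 0, w - 1).astype(int)
    src_y = np.clip(ys - motion_vectors[..., 1], 0, h - 1).astype(int)
    return history[src_y, src_x]

def temporal_upsample(current_upscaled, history, motion_vectors, alpha=0.1):
    """Blend the (naively upscaled) current frame with reprojected history.
    Plain TAA uses a fixed blend weight like this; DLSS swaps the heuristic
    for a neural network deciding per pixel how to combine the samples."""
    warped = reproject(history, motion_vectors)
    return alpha * current_upscaled + (1 - alpha) * warped

# Toy usage: static scene (zero motion), random current frame
h, w = 90, 160
history = np.zeros((h, w, 3), dtype=np.float32)
current = np.random.rand(h, w, 3).astype(np.float32)
mv = np.zeros((h, w, 2), dtype=np.float32)
frame = temporal_upsample(current, history, mv)
```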

 

Mercury_Sagit

Member
Aug 4, 2020
333
I'm still convinced that DLSS just having a pattern of pixels as the input for upscaling is too inefficient to be the future. It seems too much like a brute force solution and I feel like most games 10 years from now will use a more dynamic method that incorporates the game engine giving labels to each object on the screen (IE: red dragon #4 or the main characters pony tail). DLSS does have the advantage of being easy to implement but I think it'll go the way of SSAA, where better alternatives will come up.

I'm not an engineer at all though.
DLSS is, at its core, a deep learning neural network and, as of version 2.0, game agnostic. So my guess is that object detection may already be a part of their algorithm. Unfortunately I cannot verify this, since the white paper for Turing only provides detail on DLSS 1.0, and even once the white paper for Ampere is publicly available, I doubt that Nvidia will go into detail about how they constructed their neural network.
Edit: also, with the new sparsity feature for the tensor cores in the Ampere architecture, I suspect that Nvidia is attempting to increase the efficiency of their DLSS algorithm this way as well. Conventionally, training a neural network requires iteratively feeding it training data, and it modifies itself after each iteration. However, one can choose to intentionally prune nodes (cut parameters) during training, in order to reduce the complexity of the resulting network with an acceptable loss of accuracy.
[Image: tensor-core-1.jpg]

During the inference phase (i.e. when using DLSS in games), this also has a real performance benefit for users, since the Ampere architecture is claimed to compute sparse matrices/tensors with double the throughput of dense ones.
[Image: tensor-core-2.jpg]
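As a rough illustration of the kind of pruning Ampere's sparse tensor cores accelerate, here is a minimal sketch of 2:4 structured (fine-grained) sparsity, where the two smallest-magnitude weights in every group of four are zeroed. Real workflows fine-tune the network again after pruning to recover accuracy; that step is omitted here:

```python
import numpy as np

def prune_2_to_4(weights):
    """Zero the 2 smallest-magnitude values in every group of 4 weights.
    This is the 2:4 structured-sparsity pattern that Ampere's sparse tensor
    cores can process at roughly double the dense throughput."""
    w = weights.reshape(-1, 4).copy()
    drop = np.argsort(np.abs(w), axis=1)[:, :2]   # indices of the 2 smallest per group
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.random.randn(8, 8).astype(np.float32)
w_sparse = prune_2_to_4(w)
# Every group of 4 now contains at least 2 zeros
assert (w_sparse.reshape(-1, 4) == 0).sum(axis=1).min() >= 2
```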
 
Last edited:

Deleted member 22585

User requested account closure
Banned
Oct 28, 2017
4,519
EU
A little bit off topic, but is there any chance that Nvidia can get DLSS to work with every game, for example that the user can "feed" the game files himself to the AI?
 

mordecaii83

Avenger
Oct 28, 2017
6,862
A little bit off topic, but is there any chance that Nvidia can get DLSS to work with every game, for example that the user can "feed" the game files himself to the AI?
It's doubtful, currently DLSS needs 16k sources to train the AI and (I believe) more info from the game engine than just TAA by itself provides.
 

Deleted member 22585

User requested account closure
Banned
Oct 28, 2017
4,519
EU
It's doubtful, currently DLSS needs 16k sources to train the AI and (I believe) more info from the game engine than just TAA by itself provides.

So it'll continue to just be available for certain hand-picked games. That's unfortunate. I guess Nvidia gets in contact with the devs, the devs make an offer for how much they'll take for the implementation, and Nvidia pays. So no widespread adoption in sight.
 

Tacitus

Member
Oct 25, 2017
4,039
So it'll continue to just be available for certain hand-picked games. That's unfortunate. I guess Nvidia gets in contact with the devs, the devs make an offer for how much they'll take for the implementation, and Nvidia pays. So no widespread adoption in sight.

They're integrating the whole thing into Unreal Engine, and 2.0 doesn't need to be trained for every game, so we could be seeing every future UE game support it soon-ish (it's still an opt-in beta per their page). Devs not using UE need to integrate it into their engine themselves, but Nvidia has stated it's not much more involved than implementing TAA, IIRC.
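To give a feel for what "not much more involved than TAA" means, here is a hypothetical sketch of the per-frame data a temporal upscaler of this kind needs from the engine. This is not the actual NGX/DLSS SDK API; the type and function names are made up for illustration, but the listed inputs (low-res color, depth, motion vectors, jitter, exposure) are essentially what a TAA pass already has on hand:

```python
from dataclasses import dataclass
from typing import Any, Tuple

@dataclass
class UpscalerFrameInputs:
    """Hypothetical bundle of per-frame inputs for a temporal upscaler."""
    color_lowres: Any                    # jittered, aliased render at input resolution
    depth: Any                           # scene depth buffer
    motion_vectors: Any                  # per-pixel screen-space motion, as for TAA
    jitter_offset: Tuple[float, float]   # sub-pixel camera jitter used this frame
    exposure: float                      # scene exposure, for stable tone response

def render_frame(engine, upscaler):
    # Hypothetical calls -- stand-ins for whatever the engine/SDK actually expose.
    inputs = engine.render_lowres_frame()   # produces an UpscalerFrameInputs
    high_res = upscaler.evaluate(inputs)    # temporal upscale to output resolution
    engine.present(high_res)
```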
 

elyetis

Member
Oct 26, 2017
4,556
It's doubtful, currently DLSS needs 16k sources to train the AI and (I believe) more info from the game engine than just TAA by itself provides.
DLSS 2.0 ended the need to train the AI network for each game.
[Image: CEZUC2w.jpg]

www.nvidia.com

NVIDIA DLSS 2.0: A Big Leap In AI Rendering

Through the power of AI and GeForce RTX Tensor Cores, NVIDIA DLSS 2.0 enables a new level of performance and visuals for your games - available now in MechWarrior 5: Mercenaries and coming this week to Control.
Not that it means they can get it to work for every game with TAA just by dropping in a file or a driver update.
 

mordecaii83

Avenger
Oct 28, 2017
6,862
DLSS 2.0 ended the need to train the AI network for each game.
[Image: CEZUC2w.jpg]

www.nvidia.com

NVIDIA DLSS 2.0: A Big Leap In AI Rendering

Through the power of AI and GeForce RTX Tensor Cores, NVIDIA DLSS 2.0 enables a new level of performance and visuals for your games - available now in MechWarrior 5: Mercenaries and coming this week to Control.
Not that it means they can get it to work for every game with TAA just by dropping in a file or a driver update.
Straight from the DLSS 2.0 description on the Nvidia website:


A special type of AI network, called a convolutional autoencoder, takes the low resolution current frame, and the high resolution previous frame, to determine on a pixel-by-pixel basis how to generate a higher quality current frame.


During the training process, the output image is compared to an offline rendered, ultra-high quality 16K reference image, and the difference is communicated back into the network so that it can continue to learn and improve its results. This process is repeated tens of thousands of times on the supercomputer until the network reliably outputs high quality, high resolution images.


Once the network is trained, NGX delivers the AI model to your GeForce RTX PC or laptop via Game Ready Drivers and OTA updates. With Turing's Tensor Cores delivering up to 110 teraflops of dedicated AI horsepower, the DLSS network can be run in real-time simultaneously with an intensive 3D game. This simply wasn't possible before Turing and Tensor Cores.

Edit: The whole thing is somewhat confusing because the picture you show states that they use non-game-specific images, but why do they need to download a new driver for each DLSS game if the AI isn't using game-specific training? Oh well.
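For readers who want a mental model of what "convolutional autoencoder" means in the quoted description, here is a toy sketch in PyTorch. It is emphatically not Nvidia's network; the layer sizes, channel counts and names are arbitrary. It only shows the shape of the idea: take the upsampled low-res current frame plus the warped high-res previous frame, and let an encoder/decoder predict the refined high-res frame:

```python
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    """Toy convolutional autoencoder in the spirit of the quoted description.
    Input: upsampled low-res current frame + warped high-res previous frame.
    Output: refined high-res current frame. Not Nvidia's actual DLSS network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, current_upsampled, prev_highres_warped):
        x = torch.cat([current_upsampled, prev_highres_warped], dim=1)  # 6 channels
        return self.decoder(self.encoder(x))

# Training (conceptually, per the quote): compare the output against an
# offline-rendered ultra-high-quality reference, backpropagate the difference,
# and repeat tens of thousands of times.
net = ToyUpscaler()
out = net(torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128))
print(out.shape)  # torch.Size([1, 3, 128, 128])
```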
 

jett

Community Resettler
Member
Oct 25, 2017
44,659
So what's the process behind getting DLSS support in a game?
 

eonden

Member
Oct 25, 2017
17,087
At the moment, you need to get in touch with Nvidia, as it's not yet publicly available. In the future, it should be available for all developers, and integrated into popular game engines.
It's already integrated into UE4. Nvidia just needs to whitelist it (probably a way to ensure the quality of the DLSS integration and optimize the algorithm for the game).
 
Oct 27, 2017
3,587
It's already integrated into UE4. Nvidia just needs to whitelist it (probably a way to ensure the quality of the DLSS integration and optimize the algorithm for the game).

I know Nvidia have a fork of UE4 that has a bunch of their tech in it. I'm not sure whether it's intended to be a pull request for Epic or it's just out of convenience for Nvidia to demonstrate their stuff.
 

DieH@rd

Member
Oct 26, 2017
10,568
So what's the process behind getting DLSS support in a game?
timestamped
youtu.be

GTC 2020: DLSS - Image Reconstruction for Real-time Rendering with Deep Learning

In this talk (https://developer.nvidia.com/gtc/2020/video/s22698), Edward Liu (https://twitter.com/edliu1105) from NVIDIA Applied Deep Learning Research delv...

It's not trivial, even if the game is already kinda-ready for it [if it has support for TAA].

There was talk that Nvidia needs to approve each game, since the developer also has to provide lots of 16K screenshots to Nvidia's supercomputers for training. So even if some dev wants it, they have to go through this process.
 

GrrImAFridge

ONE THOUSAND DOLLARYDOOS
Member
Oct 25, 2017
9,675
Western Australia
You're right, but DLSS has lots of noise and objects miss a lot of detail. DLSS 2.0 is a major stepping stone, but it needs more refining. If AMD adds a FidelityFX- and TAA-based solution to their GPUs, they can do well going forward. We need AMD to put pressure on Nvidia to reduce the price of their cards. Lots of people are going to wait for Navi news. Having extra VRAM is always good for future generations of games.

You're thinking of DLSS 1.x. 2.0, while not perfect and still prone to temporal-stability issues, is a big step up in the areas you mention. "DLSS can look better than native + TAA" isn't hyperbole but rather an accurate appraisal of just how much Nvidia has improved the tech.
 

tuxfool

Member
Oct 25, 2017
5,858
I'd like to believe that'll end up being the case, but I don't see it happening. The challenges presented by DX12 and Vulkan might still drive folks back to DX11, where the driver is still doing a lot of the work.

If the abstraction layer you mentioned is that easy to implement and it's performant enough, then you're right, DX11 should be dead.
It's highly plausible. You need only look at DXVK on Linux, which converts dx11 to Vulkan. It is very performant.
 

Dream_Journey

Member
Oct 25, 2017
1,097
For some reason I feel like efficiency will be much better on AMD's cards; Nvidia dropped the ball so hard on this.
 

Uhtred

Alt Account
Banned
May 4, 2020
1,340
timestamped
youtu.be

GTC 2020: DLSS - Image Reconstruction for Real-time Rendering with Deep Learning

In this talk (https://developer.nvidia.com/gtc/2020/video/s22698), Edward Liu (https://twitter.com/edliu1105) from NVIDIA Applied Deep Learning Research delv...

It's not trivial, even if the game is already kinda-ready for it [if it has support for TAA].

There was talk that Nvidia needs to approve each game, since the developer also has to provide lots of 16K screenshots to Nvidia's supercomputers for training. So even if some dev wants it, they have to go through this process.

This is no longer the case as has been pointed out like three times already.
 

Sabin

Member
Oct 25, 2017
4,623
One of Igor's recently published Ampere videos also had some interesting RDNA2 info.

The AMD part starts at about 13:40. According to him, Big Navi at 275 watts lands somewhere between the 3070 and 3080, and with more power consumption (300W+) possibly somewhere around 3080 performance. Big Navi will not be able to attack the 3090. Take this with a lot of salt.

The AIB part starts at 15:10. He says the AIBs do not yet have a bill of materials for the Big Navi cards. It takes roughly 3 months from bill of materials to product on the shelves, so every Big Navi card this year will come directly from AMD. He says that if there somehow are AIB cards this year, they will be rush jobs and come around Christmas. Igor is very well connected within the industry, so this is as close to a confirmation as one can get.

The last thing he said about AMD is that they are delaying the Big Navi launch on purpose until close to the Ryzen launch, so the CPU takes the spotlight. Salt for this one.
 

Edgar

User requested ban
Banned
Oct 29, 2017
7,180
For some reason I feel like efficiency will be much better on AMD's cards; Nvidia dropped the ball so hard on this.
Efficiency at what, and compared to what GPU? Maybe I missed something, but everyone and their grandma seems to praise Nvidia for the prices, performance improvements and just the general presentation of the event.
 

Tovarisc

Member
Oct 25, 2017
24,432
FIN
For some reason I feel like efficiency will be much better on AMD's cards; Nvidia dropped the ball so hard on this.

If Igor is on point with the info below, then AMD with RDNA 2 isn't looking to be any more efficient than NV's 3000 series: pushing the same wattage for same-ish rendering power.

One of Igor's recently published Ampere videos also had some interesting RDNA2 info.

The AMD part starts at about 13:40. According to him, Big Navi at 275 watts lands somewhere between the 3070 and 3080, and with more power consumption (300W+) possibly somewhere around 3080 performance. Big Navi will not be able to attack the 3090. Take this with a lot of salt.

The AIB part starts at 15:10. He says the AIBs do not yet have a bill of materials for the Big Navi cards. It takes roughly 3 months from bill of materials to product on the shelves, so every Big Navi card this year will come directly from AMD. He says that if there somehow are AIB cards this year, they will be rush jobs and come around Christmas. Igor is very well connected within the industry, so this is as close to a confirmation as one can get.

The last thing he said about AMD is that they are delaying the Big Navi launch on purpose until close to the Ryzen launch, so the CPU takes the spotlight. Salt for this one.
 

mordecaii83

Avenger
Oct 28, 2017
6,862
If Igor is on point with the info below, then AMD with RDNA 2 isn't looking to be any more efficient than NV's 3000 series: pushing the same wattage for same-ish rendering power.
Do we know if GDDR6X is more or less power hungry than GDDR6? It could partially explain why 3080 and 3090 are so much more power hungry than 3070.
 
Last edited: