
What do you think could be the memory setup of your preferred console, or one of the new consoles?

  • GDDR6: 566 votes (41.0%)
  • GDDR6 + DDR4: 540 votes (39.2%)
  • HBM2: 53 votes (3.8%)
  • HBM2 + DDR4: 220 votes (16.0%)

  Total voters: 1,379

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
I have a guess about one of the areas of differentiation in the PS5 and Xbox Scarlett silicon for next generation that could end up being talked about in Digital Foundry videos and such. Both sides have mentioned 8K and 120 fps support, which is somewhat checking the box for HDMI 2.1 compliance. However, to offer such options at all, and arguably even to handle 4K at 60 fps in many demanding games, some type of "checkerboarding" or similar approach is going to be in play.

  • PS5 - Digital Foundry and others have praised the approach to checkerboarding on the PS4 Pro, and I suspect that they will have some custom silicon (or "secret sauce" if you will) that provides support to an improved version of this intended to deal with resolutions all the way up to 8K as necessary.
  • Xbox Scarlett - Notice that Navi does not include hardware support for Variable Rate Shading (VRS), which was disappointing to many, and VRS is definitely another approach to the problem of scaling to different resolutions, much like checkerboarding. Microsoft has been working deeply on VRS with DirectX (https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/), and I suspect that their custom silicon will support VRS (see the API sketch below).

I am not going to predict which hypothetical solution may end up being better, but this does seem to be a possible area of differentiation given the lack of VRS support in Navi.
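Not confirmed for any console silicon, of course, but the PC side of this already exists in DirectX 12. Here is a minimal sketch of how per-draw VRS is requested there (Tier 1, per the devblog link above); the device and command list are assumed to be created elsewhere:

```cpp
// Minimal sketch of per-draw Variable Rate Shading through D3D12 (Tier 1).
// Speculative in the console context; this is just the existing PC API.
#include <d3d12.h>

bool TryEnableCoarseShading(ID3D12Device* device,
                            ID3D12GraphicsCommandList5* cmdList)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS6,
                                           &opts6, sizeof(opts6))) ||
        opts6.VariableShadingRateTier == D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED)
        return false; // e.g. current Navi parts, per the post above

    // Shade once per 2x2 pixel quad for subsequent draws: roughly a quarter
    // of the pixel-shader work, conceptually similar to what checkerboarding
    // saves, but selectable per draw instead of baked into the whole frame.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    return true;
}
```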
8K, right along with 120fps, is just a buzzword, and a completely meaningless one. They might as well just have said we will have HDMI 2.1, but saying 8K and 120fps sounds better. These consoles will struggle to hit 4K/60fps, and we will see something very similar to this gen with 1080p: both will run at 4K/30fps for most games and at 4K/60fps for a handful of games. No matter what, there will always remain that tradeoff of how much better a game can look at 30fps as opposed to 60fps. I hope I am wrong on the 4K/60fps thing, but I doubt it.

I do however expect all 30fps games to have two graphics modes, resolution and frame rate: one focusing on 4K/30fps at high settings, the other on checkerboarded 4K/60fps at medium-to-high settings. That should become the norm if the hardware for CB is baked into the base models, so that it becomes part of the dev pipeline as opposed to something that has to be built specifically for one platform (e.g. PS4 Pro).
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
The 8K talk is 110% because it's part of the HDMI 2.1 spec. Absolutely zero silicon/hardware work will be added to these machines in an effort to make it possible.
110% --> overused buzzword bingo!

The reason I am resistant to that is that people are using it to go all revisionist history on their 12-14 TF predictions.
This indeed!


-------------------------------------

⁉ Thoughts ⁉

With up to 4 times the effective throughput, you could argue that the moment developers use Wave32, the average performance per clock will improve again, assuming the numbers AMD provided were based on games still using Wave64.
 

TUAXK

Member
Jun 10, 2019
14
I think Project Scarlett will definitely be Navi 20, as it reads on the AMD page:

"This processor builds upon the significant innovation of the AMD Ryzen™ "Zen 2" CPU core and a "Navi" GPU based on next-generation Radeon™ RDNA gaming architecture including hardware-accelerated raytracing."

*Based on next-generation Radeon.*

The video from Nvidia is not quite right. Somebody went to the effort and recalculated everything, including the power consumption:
@ 18:20 ~

 

bcatwilly

Member
Oct 27, 2017
2,483
8K, right along with 120fps, is just a buzzword, and a completely meaningless one. They might as well just have said we will have HDMI 2.1, but saying 8K and 120fps sounds better. These consoles will struggle to hit 4K/60fps, and we will see something very similar to this gen with 1080p: both will run at 4K/30fps for most games and at 4K/60fps for a handful of games. No matter what, there will always remain that tradeoff of how much better a game can look at 30fps as opposed to 60fps. I hope I am wrong on the 4K/60fps thing, but I doubt it.

I do however expect all 30fps games to have two graphics modes, resolution and frame rate: one focusing on 4K/30fps at high settings, the other on checkerboarded 4K/60fps at medium-to-high settings. That should become the norm if the hardware for CB is baked into the base models, so that it becomes part of the dev pipeline as opposed to something that has to be built specifically for one platform (e.g. PS4 Pro).

Man, you are probably right, but I was hoping for 60 fps to be the next-gen game standard. I will say that it is interesting that Sony chose to make the SSD the star of their Wired reveal, and Jason from Kotaku even noted that he had heard "zero loading times" was expected to be a major marketing line for the PS5. And then you have Xbox, where Phil Spencer has repeatedly named frame rates and overall system performance balance as the major Scarlett design goals every time he has discussed their next-generation hardware. Of course the CPUs themselves will by default provide a nice boost to the situation, but going by what is public so far, I tend to think that Microsoft may put more of an emphasis on frame rates for their first-party games at least (maybe mandate at least the types of modes you are citing?).
 

BreakAtmo

Member
Nov 12, 2017
12,838
Australia
Same - they should focus on 60 fps instead with these new CPUs and throw as much GPU in there as they can for that 60.

God, if only. The CPU jump should absolutely allow them to go to 60fps while also seeing huge gains in simulation complexity and whatnot. I'm just worried that the GPU power that will also be needed might go straight to the visuals, even if they do use reconstruction techniques.

Designing all games to run at reconstructed 2160p60 would just be the best, IMO. And do 120fps for older games that can manage it.
 

Straffaren666

Member
Mar 13, 2018
84
⁉ Thoughts ⁉

With up to 4 times the effective throughput, you could argue that the moment developers use Wave32, the average performance per clock will improve again, assuming the numbers AMD provided were based on games still using Wave64.

For most use cases, Wave64 will probably still be the most efficient mode for games, since there seem to be dependency stalls in Wave32 mode. If I understand it correctly, the major benefit of Wave32 over Wave64 is the ability to run twice as many concurrent waves, since each Wave32 wave uses half the resources. That's beneficial when running shaders with high register pressure (>64 VGPRs), but that's not the typical use case for games.
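As a back-of-the-envelope illustration of that register-pressure point, here is a rough occupancy calculator. The constants are assumptions taken from public RDNA material (a 128KB register file per SIMD32, i.e. 1024 wave32-sized VGPRs, and a cap of 20 wave slots), not confirmed console figures:

```cpp
// Rough wave occupancy per RDNA SIMD32. Constants are assumptions from
// public RDNA material, not confirmed for any console part.
#include <algorithm>
#include <cstdio>

int WavesPerSimd32(int vgprsPerWave, bool wave64)
{
    const int kVgprBudget = 1024; // wave32-sized VGPRs per SIMD32 (assumed)
    const int kMaxWaves   = 20;   // hardware wave slots per SIMD32 (assumed)
    // A Wave64 wave needs two 32-lane register slices per logical VGPR.
    const int cost = wave64 ? vgprsPerWave * 2 : vgprsPerWave;
    return std::min(kMaxWaves, kVgprBudget / cost);
}

int main()
{
    // High register pressure (>64 VGPRs): Wave32 keeps twice as many waves
    // in flight, which is exactly the latency-hiding benefit described above.
    std::printf("96 VGPRs, Wave32: %d waves\n", WavesPerSimd32(96, false)); // 10
    std::printf("96 VGPRs, Wave64: %d waves\n", WavesPerSimd32(96, true));  // 5
}
```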
 

BreakAtmo

Member
Nov 12, 2017
12,838
Australia
I was just playing Xenoverse 2, and I was thinking of another small way SSD loading could improve a game. Whenever I'm playing a quest and a character transforms, it's done by pausing the action, cancelling all moves that have been made (frequently screwing me out of some Ki or Stamina) and going into a cutscene where they transform at a predetermined place. When that's done, they've been teleported to that place and I have to fly over to them.

Now, I'm assuming that this is a result of having to load in the new character model with new abilities in a predictable way. Does that mean that a next-gen exclusive Xenoverse 3 could replace all this annoying rigmarole with just having characters transform into their other forms and gain new abilities entirely in real time with no pause to the action?
 

Putty

Double Eleven
Verified
Oct 27, 2017
931
Middlesbrough
Man, you are probably right, but I was hoping for 60 fps to be the next-gen game standard. I will say that it is interesting that Sony chose to make the SSD the star of their Wired reveal, and Jason from Kotaku even noted that he had heard "zero loading times" was expected to be a major marketing line for the PS5. And then you have Xbox, where Phil Spencer has repeatedly named frame rates and overall system performance balance as the major Scarlett design goals every time he has discussed their next-generation hardware. Of course the CPUs themselves will by default provide a nice boost to the situation, but going by what is public so far, I tend to think that Microsoft may put more of an emphasis on frame rates for their first-party games at least (maybe mandate at least the types of modes you are citing?).

Honestly, I applaud your excitement but think you're overthinking the CPU marketing.
 

bcatwilly

Member
Oct 27, 2017
2,483
Honestly, I applaud your excitement but think you're overthinking the CPU marketing.

Fair enough, as everyone in this thread is probably guilty of overthinking things since we don't have all of the information yet :) But I still stand by the point that these companies have certain overall design goals and focuses when they set about doing these things, even though the base is pretty much the same with AMD, and so far at least Xbox has really focused on frame-rate performance when discussing things (first mentioned by Phil Spencer at last E3, even).
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
The main takeaway on the performance of RDNA vs GCN (red rectangle):

[image: AMD slide comparing RDNA and GCN performance]
 
Oct 27, 2017
7,139
Somewhere South
For most use cases, Wave64 will probably still be the most efficient mode for games, since there seem to be dependency stalls in Wave32 mode. If I understand it correctly, the major benefit of Wave32 over Wave64 is the ability to run twice as many concurrent waves, since each Wave32 wave uses half the resources. That's beneficial when running shaders with high register pressure (>64 VGPRs), but that's not the typical use case for games.

From the slides, Wave64 has potentially worse dependency stalls, actually.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
Previously, AMD called a whole slice of the GPU a Shader Engine.

[image: GCN Shader Engine block diagram]

There was one Geometry Processor and Rasterizer, then the Shader Array consisting of a certain number of CUs (up to 16 under GCN) and the Render Backends (up to 4 Render Backends / 16 ROPs).
One issue with the GCN Shader Engine design was that, with more CUs, it was harder for the Shader Engine to feed them all if the workload was too short, since the CUs would finish it faster than the SE could feed them.

[image: AMD presentation slide]

The Navi 10 diagram actually shows no logical reason why it would be two Shader Engines; in the earlier terminology it would be four Shader Engines (the blocks in red).

[image: Navi 10 block diagram]

Under Navi 10, one slice seems to be 1 Prim Unit, 1 Rasterizer, a certain number of CUs (5 WGPs / 10 CUs under Navi 10), 4 Render Backends (16 ROPs) and 128KB of L1$ (at least for Navi 10).
The wave launch rate wasn't specified, but AMD probably made sure that this wouldn't be an issue.
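(As an aside, those counts tally up neatly against the retail Navi 10 part; a trivial check using only the numbers above:)

```cpp
// Tallying the Navi 10 layout described above: 4 slices, each with 5 WGPs
// (10 CUs), 4 Render Backends (16 ROPs) and 128KB of L1$. Numbers come
// straight from the post; this is just arithmetic.
#include <cstdio>

struct Slice {
    int wgps;           // workgroup processors, 2 CUs each
    int renderBackends; // 4 ROPs each
    int l1KiB;          // per-slice graphics L1$
};

int main()
{
    const Slice navi10{5, 4, 128};
    const int slices = 4;
    std::printf("CUs:  %d\n", slices * navi10.wgps * 2);           // 40
    std::printf("ROPs: %d\n", slices * navi10.renderBackends * 4); // 64
    std::printf("L1$:  %d KiB\n", slices * navi10.l1KiB);          // 512
}
```

That comes out at exactly the 5700 XT's 40 CUs and 64 ROPs.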

Thank you for this. So, from what I understand, the cluster comprising the following is considered to be 1 WGP?:

  • 2 CU (which share the following)
    • Vector Registers
    • Stream Processors
    • Schedulers
    • Scalar Units
    • Scalar Registers
    • Scalar Data Cache
    • Shader Instruction Cache
    • TMU
    • Texture Filter Units
    • Local Data Share (which presumably is one of the L0 or L1 caches?)
 

Straffaren666

Member
Mar 13, 2018
84
The main takeaway on the performance of RDNA vs GCN (red rectangle):

[image: AMD slide comparing RDNA and GCN performance]
Slide 14. I am of the same opinion as Miniature Kaiju.

[image: slide 14]

I agree with you; based on that slide there doesn't seem to be any advantage to executing in Wave64 mode. In Wave32 mode there are four cycles of dependency stalls and in Wave64 there are two cycles of dependency stalls. But to make a fair comparison, we have to compare two concurrent Wave32 waves against one Wave64 wave. In that case the scheduler will be able to issue a sALU instruction on the other Wave32 wave, eliminating the first sALU stall and two vALU instructions in the three-cycle vALU stall, resulting in only one vALU stall cycle in Wave32 mode versus two in Wave64 mode.
 

isahn

Member
Nov 15, 2017
990
Roma
Current devkits likely have Vega 10/20 in them which can easily be at 12-14 Tflops. Navi in the retail units won't though.
Indeed. From a dev point of view, the API and, more generally, the development environment of a new platform are the most important things to get early hands-on time with. And I'm sure that a Zen 2 + Vega PC can support the Scarlett/PS5 dev environment just fine.

Q: Impossible, what about hardware ray-tracing and the alien SSD?!
A: The one gets demoted to a software backend, and a PCIe 3.0 SSD stands in for the other!
 

M.Bluth

Member
Oct 25, 2017
4,257
⁉ Thoughts ⁉

With up to 4 times the effective throughput, you could argue that the moment developers use Wave32, the average performance per clock will improve again, assuming the numbers AMD provided were based on games still using Wave64.
That was something I meant to ask about... For PC games, could we see a significant improvement for Radeon cards going forward if devs implement this?
If so, it'd be great for AMD in closing the gap between them and Nvidia in gaming performance.
 

Straffaren666

Member
Mar 13, 2018
84
The main takeaway on the performance of RDNA vs GCN (red rectangle):

[image: AMD slide comparing RDNA and GCN performance]

For most use cases the throughput will be the same, though. In this case, the per-work-item IPC is only meaningful when there aren't enough concurrent waves to hide the latency. Generally, GCN will complete four times the work in four times the time, compared to RDNA.
 

Locuza

Member
Mar 6, 2018
380
Thank you for this. So, from what I understand, the cluster comprising the following is considered to be 1 WGP?:

  • 2 CU (which share the following)
    • Vector Registers
    • Stream Processors
    • Schedulers
    • Scalar Units
    • Scalar Registers
    • Scalar Data Cache
    • Shader Instruction Cache
    • TMU
    • Texture Filter Units
    • Local Data Share (which presumably is one of the L0 or L1 caches?)
"Consisting" is a better term than "share", because certain logic like the schedulers, scalar units, TMUs and the L0$ is not shared between the two CUs but operates either per CU or per SIMD32.

The Local Data Share is not a cache but scratchpad memory which is manually managed by the application, for example with compute shaders under DX11/12.
It's a separate memory structure under GCN and RDNA:
[image: GCN LDS block diagram]

[image: RDNA LDS block diagram]



Nvidia calls it Shared Memory, and its implementation changed multiple times from Fermi to Turing.
Pascal had it as a separate 96KB structure (for 128 vALUs), while Turing has a unified memory structure which can be configured as 64KB Shared Memory + 32KB L1$ (the default mode for games) or as 32KB Shared Memory + 64KB L1$ (for 64 vALUs).
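To make the "manually managed scratchpad" point concrete, here is a toy compute shader using HLSL `groupshared` memory, which is what lands in the LDS on GCN/RDNA (and in Shared Memory on Nvidia). The reduction itself is purely illustrative, written as a source string the way a D3D sample would feed it to the shader compiler; the surrounding device code is omitted:

```cpp
// Toy 64-element reduction staged through groupshared memory (the LDS).
// Illustrative only; compile with D3DCompile or dxc as usual.
const char* kReduceCS = R"hlsl(
StructuredBuffer<float>   gInput  : register(t0);
RWStructuredBuffer<float> gOutput : register(u0);

groupshared float lds[64];   // explicitly allocated scratchpad, not a cache

[numthreads(64, 1, 1)]
void CSMain(uint3 gtid : SV_GroupThreadID, uint3 gid : SV_GroupID)
{
    lds[gtid.x] = gInput[gid.x * 64 + gtid.x];   // stage data into LDS
    GroupMemoryBarrierWithGroupSync();           // visible to all 64 lanes

    for (uint stride = 32; stride > 0; stride >>= 1)
    {
        if (gtid.x < stride)
            lds[gtid.x] += lds[gtid.x + stride]; // tree reduction in LDS
        GroupMemoryBarrierWithGroupSync();
    }
    if (gtid.x == 0)
        gOutput[gid.x] = lds[0];                 // one result per group
}
)hlsl";
```

Unlike the L0$/L1$, nothing moves in or out of `lds` unless the shader explicitly writes it, which is the difference being pointed at above.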
 

Straffaren666

Member
Mar 13, 2018
84
That was something I meant to ask about... For PC games, could we see a significant improvement for Radeon cards going forward if devs implement this?
If so, it'd be great for AMD in closing the gap between them and Nvidia in gaming performance.

No, that IPC comparison is grossly misleading and will only occur in the most pathological use cases.
 

dgrdsv

Member
Oct 25, 2017
11,885
With up to 4 times the effective throughput, you could argue that the moment developers use Wave32, the average performance per clock will improve again, assuming the numbers AMD provided were based on games still using Wave64.
The effective throughput of the CU isn't really changing from GCN. What's changing is the instruction issue latency and the execution pipeline depth.
It was 4 16-wide SIMDs, each running one wave over 4 clocks, so you'd get 4x64=256 results in 4 clocks.
Now it's 2 32-wide SIMDs, each running one wave over 1 clock, so you get 2x32=64 results in 1 clock, or 64x4=256 results in 4 clocks.
The fact that you can issue new instructions each clock and a wave completes in one clock means a lot for pipeline stalls; the result can already be seen in the 5700 XT presumably beating the Vega 64 despite the latter having ~40% more processing power.
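Spelling that arithmetic out, since it is easy to misread as a throughput increase (this is just the post's numbers, nothing new):

```cpp
// Issue-rate arithmetic from the post: peak throughput per CU is unchanged,
// only the per-wave latency differs.
#include <cstdio>

int main()
{
    // GCN CU: 4 SIMD16s, each runs one 64-wide wave over 4 clocks.
    const int gcnResultsPer4Clk  = 4 * 16 * 4;  // 256
    // RDNA CU: 2 SIMD32s, each finishes one 32-wide wave every clock.
    const int rdnaResultsPer4Clk = 2 * 32 * 4;  // 256
    std::printf("GCN:  %d results / 4 clocks\n", gcnResultsPer4Clk);
    std::printf("RDNA: %d results / 4 clocks\n", rdnaResultsPer4Clk);
    // Same peak rate; RDNA's win is 1-clock (vs 4-clock) instruction
    // latency, so dependent instructions waste far fewer issue slots.
}
```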
 

dgrdsv

Member
Oct 25, 2017
11,885
Considering that the new generation is starting at about twice the GPU power of the previous generation's mid-gen upgrades, I really doubt that we'll see many 60 fps titles.
People keep thinking for some reason that a faster CPU is the only thing you need to get to 60.
 
Oct 27, 2017
7,139
Somewhere South
Considering that the new generation is starting at about twice the GPU power of the previous generation's mid-gen upgrades, I really doubt that we'll see many 60 fps titles.
People keep thinking for some reason that a faster CPU is the only thing you need to get to 60.

+1

We'll see 60 fps games, and possibly more of them than in the current gen, but I don't think they will be anywhere near as ubiquitous as people seem to think (and definitely won't be an "across the board" thing).
 
Oct 25, 2017
17,904
I just wouldn't go in expecting 60fps across the board either way. Some devs just don't do 60fps games, regardless; more goes into that decision than hardware.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Man, you are probably right, but I was hoping for 60 fps to be the next-gen game standard. I will say that it is interesting that Sony chose to make the SSD the star of their Wired reveal, and Jason from Kotaku even noted that he had heard "zero loading times" was expected to be a major marketing line for the PS5. And then you have Xbox, where Phil Spencer has repeatedly named frame rates and overall system performance balance as the major Scarlett design goals every time he has discussed their next-generation hardware. Of course the CPUs themselves will by default provide a nice boost to the situation, but going by what is public so far, I tend to think that Microsoft may put more of an emphasis on frame rates for their first-party games at least (maybe mandate at least the types of modes you are citing?).
Shouldn't be a problem for them if they are really pushing to continue supporting the XB1 or their platform-free thingy.
 

Fafalada

Member
Oct 27, 2017
3,066
  • PS5 - Digital Foundry and others have praised the approach to checkerboarding on the PS4 Pro, and I suspect that they will have some custom silicon (or "secret sauce" if you will) that provides support to an improved version of this intended to deal with resolutions all the way up to 8K as necessary.
To be fair, the Pro has other custom features related to resolution manipulation that haven't been widely talked about, partly because they are usually associated with VR (not at all the only thing they are useful for), and partly because anything that requires a highly custom code path on the tiny install base of the premium consoles just doesn't get a lot of use, period.
But VRS is IMO orthogonal to all of this; it's not an either/or scenario, and having control over shading granularity is relevant to more than just output resolution IMO (speaking hypothetically here, as any serious research work is still in its infancy).
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
It's 220W TBP, 180W TDP.

[image: power consumption table]
Worth noting that part of the reason Nvidia is so much more efficient here is clocks in the 1600 MHz range for the 2060 and 2070. Unfortunately, AMD has to clock Navi faster to keep up with those cards, given Nvidia's architectural advantages. We can bet the consoles will likely be 1600 MHz or below (my guess is 1400-1500) and will sit at a happier place on the efficiency curve. They'll have to offset that with more CUs if they want to equal the above parts, though.
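For intuition on why lower clocks buy so much efficiency: dynamic power scales roughly with f·V², and voltage has to climb with frequency near the top of the curve. The voltage figures below are invented purely for illustration, since real Navi V/f curves aren't public:

```cpp
// Why "wider but slower" wins perf/W: dynamic power ~ f * V^2, and V rises
// with f near the top of the curve. Voltages here are made-up examples.
#include <cstdio>

int main()
{
    const double v1800 = 1.20, v1500 = 1.05;   // assumed volts at each clock
    const double p1800 = 1800 * v1800 * v1800; // relative dynamic power
    const double p1500 = 1500 * v1500 * v1500;
    std::printf("perf ratio:  %.2f\n", 1500.0 / 1800.0); // ~0.83 (tracks clock)
    std::printf("power ratio: %.2f\n", p1500 / p1800);   // ~0.64
    // Performance drops ~17% but power drops ~36%, so adding CUs at the
    // lower clock recovers the performance at better overall efficiency.
}
```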
 
Oct 25, 2017
1,760
None of us in this thread thought we'd see clocks that high until the Gonzalo leak. These parts have an impeccable track record of ending up in consoles. And frankly, there are several people in this thread just as knowledgeable on this subject as Richard, if not more so.

That leak, which may or may not be related to the PS5, and may or may not be representative of final, sustained clocks, doesn't hold much water, imo, now that AMD's and Navi's actual capabilities in that area are in full view. Everyone putting their faith in that leak is also expecting consoles to reach near-parity with nigh-on equivalent dGPUs, even though the former will be going into final production only months after the latter launch.

Not just Gonzalo, but also the leaks about PS5 devkits running even around 1850 MHz, not just 1800 MHz.

What leaks?
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
That leak, which may or may not be related to the PS5, and may or may not be representative of final, sustained clocks, doesn't hold much water, imo, now that AMD's and Navi's actual capabilities in that area are in full view. Everyone putting their faith in that leak is also expecting consoles to reach near-parity with nigh-on equivalent dGPUs, even though the former will be going into final production only months after the latter launch.

Provisional clocks in engineering samples go up in final products, not down.

The leak is almost assuredly linked to a next gen console. AMD has no other known semi-custom projects, and these chips following this numbering scheme have invariably ended up in consoles. Skepticism is always healthy, but dismissing a source of information because it doesn't agree with your personal beliefs just puts your bias on display.

Even if it reached those clocks and it was a Navi 10 die, it wouldn't hit the 10TF threshold many are hoping for. And as far as the timeline goes, the Navi dies in shipping products this July and the dies going into next-gen consoles will probably be separated in manufacture date by an entire calendar year.
 

Lashley

<<Tag Here>>
Member
Oct 25, 2017
60,015
Considering that the new generation is starting at about twice the GPU power of the previous generation's mid-gen upgrades, I really doubt that we'll see many 60 fps titles.
People keep thinking for some reason that a faster CPU is the only thing you need to get to 60.
Yup. Setting themselves up for disappointment.
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
Considering that the new generation is starting at about twice the GPU power of the previous generation's mid-gen upgrades, I really doubt that we'll see many 60 fps titles.
People keep thinking for some reason that a faster CPU is the only thing you need to get to 60.
4K60 is really fucking hard. Very few desktop GPUs can manage it, and even then it's title-dependent.
 