What do you think could be the memory setup of your preferred console, or one of the new consoles?

  • GDDR6
    Votes: 566 (41.0%)
  • GDDR6 + DDR4
    Votes: 540 (39.2%)
  • HBM2
    Votes: 53 (3.8%)
  • HBM2 + DDR4
    Votes: 220 (16.0%)

  • Total voters
    1,379

JahIthBer

Member
Jan 27, 2018
10,409
Because one of these consoles is rumored to be 390+ mm², and adding CUs isn't nearly as detrimental to power draw as raising clock speeds.

I.e., the GPU will be more efficient per watt with more CUs and lower clocks.

If it's really 390 mm², then 40 CUs doesn't add up.
I dunno if a next-gen console can get 1.8 GHz clocks with more than 40 CUs, but I also expect that a cut-down 5700 is too weak for next gen. Sony/Microsoft are aware it's in truth a budget mid-range GPU that is for some reason $450; they will likely want something with more punch, but who knows.
Likely the 1.8 GHz leak isn't PS5, and PS5 uses something with at least 48 CUs.
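
To make the CU-count versus clock-speed trade-off concrete, here is a minimal sketch using the usual back-of-envelope formula for GCN/RDNA FP32 throughput (64 shaders per CU, 2 FLOPs per clock); the configurations below are hypothetical examples, not leaked specs.

```python
# Rough FP32 throughput estimate for a GCN/RDNA-style GPU.
# TFLOPS = CUs x 64 shaders/CU x 2 FLOPs/clock x clock (GHz) / 1000.
# The example configurations are hypothetical, not rumoured console specs.

def tflops(cu_count: int, clock_ghz: float) -> float:
    return cu_count * 64 * 2 * clock_ghz / 1000.0

print(tflops(40, 1.8))  # ~9.2 TF: fewer CUs, high clocks
print(tflops(48, 1.5))  # ~9.2 TF: more CUs, lower clocks, typically better perf/W
print(tflops(56, 1.7))  # ~12.2 TF
```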
 
Oct 26, 2017
6,151
United Kingdom
No, he speculates that Navi12Lite and Navi21Lite could be Arden; he doesn't know that they are Arden.

According to hmqgg, Arden and Argalus are not even Anaconda and Lockhart, but rather something else:

Arden and Argalus codenames are parallel to some other things.

But then he says:

I need to clarify this.

I don't know much about what Arden and Argalus mean, and ofc they're Xbox-related codenames, just like Brad Sams said.

Which is confusing because hmqgg is the first person to mention Arden and Argalus. Brad Sams even credits him in his videos.

Brad Sams only mentioned Anaconda, Lockheart, Maverick (XB SAD) and Anubis (Scarlett SoC).

As far as I can tell Arden and Argalus came from hmqgg.
 
Last edited:

Andromeda

Member
Oct 27, 2017
4,868
Why would console APUs be found in desktop Linux commits? It doesn't make sense to me. Were the PS4, Pro, XB1 and Scorpio APUs also found there? Whatever those are, they are not home console APUs IMO.
 

Andromeda

Member
Oct 27, 2017
4,868
Arden has a non-transparent bridge, this is cloud related...
Well, if those are meant to be servers, then it makes sense to find them in desktop Linux commits. But those APUs are not for home consoles.

New wild theory: all those Navi Lite APUs could be aimed at cloud gaming, hence the gaming reference in the serials.
 

Dave.

Member
Oct 27, 2017
6,192
Like I've explained in the past, power limitations only matter insofar as more cooling is required. GDDR6 doesn't need its own active cooling.
I think you're wrong on this.

Have you looked inside a PS4 Pro, an X1X, or a recent high-end graphics card of any sort? In all of these, the GDDR chips are connected directly with thermal pads to the very same heatsink as the APU (the GPU in the case of a graphics card, obviously). They absolutely do make use of active cooling and contribute to the thermal load that the unit's cooling system needs to handle.
 

TheRaidenPT

Editor-in-Chief, Hyped Pixels
Verified
Jun 11, 2018
6,015
Lisbon, Portugal
Hello there Navi 10 Lite:




[image attachment]


cc: Colbert, anexanhume


Does seem to be for consoles with that clock and naming scheme... Things are becoming interesting
 
Oct 26, 2017
6,151
United Kingdom
Do we have any clue what that could be? Kits for xCloud maybe? Or something Surface related?

No idea.

It reminds me that Brad Sams reported that xCloud specifically has a TF number above Stadia's 10.7 TF, so maybe they needed an RDNA 2.0 chip based off Navi 20 to reach that on the server.

Don't expect a Navi 20 derivative chip in a console, everyone.

Navi 20 and Navi 10 are all desktop parts, derivatives of Navi and RDNA. Console chips will be custom.

Arden has a non-transparent bridge, this is cloud related...

Apparently, so has Gonzalo, afaicr.
 

eightg4

Member
Oct 31, 2017
138
Paris, France
There is something odd about the data we have so far. I don't doubt what any respectable reporters have heard, so I guess I'll say it's 60% likely that PS5 is more powerful at this point (my own random ass guess). But at the same time we have an enormous Anaconda die and we know MS was gunning for some sort of high end, premium console with the two console strategy.

So, if PS5 is more powerful, it needs to be VERY powerful. Not like 10TF or 11TF or maybe even 12TF. And that would feel like a significant shift in Sony's strategy. Back to the PS3 days in a way, very premium and likely very expensive. Obviously they are a lock to sell the most units next gen but I do wonder if they're giving MS a foothold for a $350 Lockhart to grab a bunch of sales while the PS5 sits at $500+.

Other possibilities:
- Sony is willing to lose a boatload of money on hardware
- The rumor is wrong
- The die size estimate is inaccurate

You're missing a point: teraflops are not the relevant metric.

According to rumors and what we have in terms of intellectual property information (not talking about raytracing rendering tech here), Sony seems to have built the console around a very fast SSD (we're talking about an SSD theoretically capable of 10-20 GB/s, with no hardware controller) and very fast memory at low power (HBM2 or maybe 3).
This SSD alone is a big achievement if those bandwidth figures turn out to be true. A low-power memory architecture plus a low-power SSD tech (no controller means less heat) lets you clock the GPU higher (see the quick illustration after this post).

It also seems Sony is using a chiplet architecture, contrary to Microsoft.
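
As a rough illustration of the SSD point (purely arithmetic; the drive speeds and the 16 GB pool size are assumptions chosen for the example, not rumoured specs):

```python
# How long it takes to refill a RAM pool from storage at different read speeds.
# All figures are illustrative assumptions, not confirmed hardware specs.

ram_gb = 16.0  # assumed size of the memory pool to refill

for label, gb_per_s in [
    ("Mechanical HDD (~0.1 GB/s)", 0.1),
    ("SATA SSD (~0.5 GB/s)",       0.5),
    ("Fast NVMe SSD (~5 GB/s)",    5.0),
    ("Rumoured 10-20 GB/s SSD",   15.0),
]:
    print(f"{label:28s} -> {ram_gb / gb_per_s:6.1f} s to stream {ram_gb:.0f} GB")
```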
 

Jawmuncher

Crisis Dino
Moderator
Oct 25, 2017
38,902
Ibis Island
This is what I hope for PS5 and XB2 personally

" A CPU that's 32-bit & 733 MHz, custom Intel Pentium III Coppermine-based processor. Has a 133 MHz 64-bit GTL+ front-side bus (FSB) with a 1.06 GB/s bandwidth. The system has 64 MB unified DDR SDRAM, with a 6.4 GB/s bandwidth, of which 1.06 GB/s is used by the CPU and 5.34 GB/s shared by the rest of the system. "
 
Feb 26, 2018
2,753
This is what I hope for PS5 and XB2 personally

" A CPU that's 32-bit & 733 MHz, custom Intel Pentium III Coppermine-based processor. Has a 133 MHz 64-bit GTL+ front-side bus (FSB) with a 1.06 GB/s bandwidth. The system has 64 MB unified DDR SDRAM, with a 6.4 GB/s bandwidth, of which 1.06 GB/s is used by the CPU and 5.34 GB/s shared by the rest of the system. "

And what will MS call this beast?
XBOX 2020
 

Sekiro

Member
Jan 25, 2019
2,938
United Kingdom
And this is just the calm before the storm of the big leaks. We are not ready for when Jason clicks publish lol.

Holy shit, that nuclear option would end this thread and crash ERA for hours. Most of us would get banned during the all-out argument, while the remaining survivors would tell stories about how Jason destroyed the internet. And not just ERA, half the internet would implode.

Can't wait for it lol, I'll light a fire and have some fine wine while I watch the chaos from afar.
 

VX1

Member
Oct 28, 2017
7,007
Europe
Seems like the new Navi 10 post by Komachi further points at HBCC and the HBM + DDR4 rumor.

I wonder if that's just because of the Vega cards in dev kits; they all have HBM2 memory after all...
Anyway, the only thing that seems solid for several months now: Navi 10 Lite / Ariel is the GPU part of Gonzalo, and Gonzalo is classified by AMD as a gaming device/console, not some desktop part.
 

Binabik15

Member
Oct 28, 2017
4,695
So we're missing out on actual rumours from a trustworthy source because people needed to be console-warrior asshats. Great :/ And we're still arguing about power based on tea-leaf readings of Sony's intentions and PR from MS, because people scared off the messenger.

I guess I'll skip reading this thread until news pops up that's worthy of a standalone thread.
 

chris 1515

Member
Oct 27, 2017
7,077
Barcelona Spain
https://www.resetera.com/threads/ne...-anaconda-dont-want-none.112607/post-21043234 but not sure if anexanhume is a dev ;) Edit: maybe it will use less TDP but it will also have less bandwidth

We have a measurement coming from Tom's Hardware: it is 16 watts for 8 GB of GDDR6, 32 watts for 16 GB and 48 watts for 24 GB of GDDR6*, but power consumption is linked to bus size, and for a 384-bit bus I imagine between 48 and 58 watts, all of this without the memory controller. It seems they count it inside the GPU power consumption in the Tom's Hardware article...


[image attachment]


It seems 8 GB of HBM2 is 10 watts and 16 GB of DDR4 is nearly 7-8 watts, so you have a non-negligible gain for the same memory size. And HBM3 and DDR5 consume less...


* Power consumption seems to grow linearly with memory quantity, judging by the DDR4 measurements.
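
A minimal sketch of the comparison being made, assuming the per-GB figures quoted above and the linear scaling the footnote mentions (this ignores bus width and memory clocks, which later replies point out also matter):

```python
# Very rough memory-power comparison using the per-GB figures quoted above.
# Assumes linear scaling with capacity, per the footnote; ignores bus width
# and memory clock effects, which also influence the real numbers.

WATTS_PER_GB = {
    "GDDR6": 16.0 / 8,   # ~16 W for 8 GB (Tom's Hardware figure quoted above)
    "HBM2":  10.0 / 8,   # ~10 W for 8 GB
    "DDR4":   7.0 / 16,  # ~7 W for 16 GB
}

def pool_power(config):
    """config maps memory type -> capacity in GB; returns estimated watts."""
    return sum(WATTS_PER_GB[mem] * gb for mem, gb in config.items())

# Hypothetical next-gen memory setups, not leaked specs:
print(pool_power({"GDDR6": 24}))            # ~48 W for a single 24 GB GDDR6 pool
print(pool_power({"HBM2": 8, "DDR4": 16}))  # ~17 W for a split 8+16 GB pool
```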
 
Last edited:

sncvsrtoip

Banned
Apr 18, 2019
2,773
We have a measurement coming from Tom's Hardware: it is 16 watts for 8 GB of GDDR6, 32 watts for 16 GB and 48 watts for 24 GB of GDDR6*, but power consumption is linked to bus size, and for a 384-bit bus I imagine between 48 and 58 watts, all of this without the memory controller. It seems they count it inside the GPU power consumption...


[image attachment]


It seems 8 GB of HBM2 is 10 watts and 16 GB of DDR4 is nearly 7-8 watts, so you have a non-negligible gain


* Power consumption seems to grow linearly with memory quantity, judging by the DDR4 measurements.
OK, but 8 GB for the GPU seems small for next gen, and reading from DDR4 is too slow for the GPU, so you get lower power consumption for inferior resolution; nothing breathtaking.
 
Last edited:

19thCenturyFox

Prophet of Regret
Member
Oct 29, 2017
4,321
About the HBM2 + DDR4 solution, I think the fact that we're dealing with a split memory pool will be a bigger issue than the HBM2 portion having slightly lower bandwidth than GDDR6. Sure, there are supposedly some shenanigans going on to make the two memory pools look like one pool, and the memory is supposed to be allocated automatically, but what if a developer decides to use 12 GB or more of the 24 GB as VRAM? The next-gen Xbox would be at a rather big advantage in that case, and that scenario doesn't sound all that unrealistic, especially when we look at how unified memory was used this gen (a rough sketch of this follows after this post).

EDIT:

I mean, this would still be my preferred solution. It would make next gen more interesting, and the TDP gains could be invested in higher clocks, but there are some significant downsides.
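
A hedged sketch of that split-pool concern (the pool sizes and bandwidth figures are assumptions chosen for illustration, not rumoured specs, and the model naively assumes uniform access across the whole footprint):

```python
# Illustration of the split-pool worry: if a game's hot GPU data outgrows the
# fast pool, the overflow is served from the slow pool and the average
# bandwidth the GPU sees drops. All figures are assumptions for illustration.

FAST_POOL_GB, FAST_BW_GBPS = 8.0, 450.0   # e.g. a single HBM2 stack
SLOW_BW_GBPS = 50.0                        # e.g. a modest DDR4 interface

def effective_bandwidth(vram_needed_gb: float) -> float:
    """Naive weighted average, assuming accesses spread evenly over the footprint."""
    if vram_needed_gb <= FAST_POOL_GB:
        return FAST_BW_GBPS
    fast_fraction = FAST_POOL_GB / vram_needed_gb
    return fast_fraction * FAST_BW_GBPS + (1.0 - fast_fraction) * SLOW_BW_GBPS

for need in (6, 8, 12, 16):
    print(f"{need:2d} GB VRAM footprint -> ~{effective_bandwidth(need):5.1f} GB/s average")
```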
 
Feb 26, 2018
2,753
About the HBM2 + DDR4 solution, I think the fact that we're dealing with a split memory pool will be a bigger issue than the HBM2 portion having slightly lower bandwidth than GDDR6. Sure, there are supposedly some shenanigans going on to make the two memory pools look like one pool, and the memory is supposed to be allocated automatically, but what if a developer decides to use 12 GB or more of the 24 GB as VRAM? The next-gen Xbox would be at a rather big advantage in that case, and that scenario doesn't sound all that unrealistic, especially when we look at how unified memory was used this gen.
Yeah. It looks like GDDR6 is a pretty safe solution for devs.
 
Oct 26, 2017
6,151
United Kingdom
We have a measurement coming from Tom's Hardware: it is 16 watts for 8 GB of GDDR6, 32 watts for 16 GB and 48 watts for 24 GB of GDDR6*, but power consumption is linked to bus size, and for a 384-bit bus I imagine between 48 and 58 watts, all of this without the memory controller. It seems they count it inside the GPU power consumption...


[image attachment]


It seems 8 GB of HBM2 is 10 watts and 16 GB of DDR4 is nearly 7-8 watts, so you have a non-negligible gain


* Power consumption seems to grow linearly with memory quantity, judging by the DDR4 measurements.

Memory chip power consumption also depends on the speed of the chips (i.e. memory clocks) and thus voltages.
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
Thanks! From my quick reading, this (on a very rough level) means they'd be responsible for generating the clock signal, or otherwise maintaining sync?


Again, no official word from AMD but I think it'd be extremely surprising if they're limited that way.


Yes, adding 8 WGP would only require 36.4 mm². But we know that Anaconda also has a wider memory bus than Navi 10 (5700), so there's an extra 13.19 mm² there too. Thus ~371 mm² for 26 WGP. But that's assuming no RT hardware; if we add that, we're now talking about 381 mm². Still theoretically possible, but I think there's another concern: in this setup you've got 30% more stream processors than the 5700 XT, but still the same amount of cache for them to share, along with other support silicon. I'm definitely no GPU designer, but I doubt there's that much wiggle room in how these pipelines are sized relative to each other. So either you shouldn't add quite as many WGPs per SE, or you can but you also have to add the appropriate supporting hardware, which makes the overall size go up yet again.

It's not impossible, but I think this is the maximum to expect. And a little lower on the WGP count would probably be more plausible.
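
The arithmetic in that quote relies on a baseline figure from earlier in the thread that isn't shown here, so here is a hedged sketch of the kind of estimate being made. The per-WGP and bus increments come from the quote; the ~321 mm² baseline and the 10 mm² RT allowance are back-calculated assumptions that make the quoted totals line up, not known values.

```python
# Back-of-envelope die-area estimate in the spirit of the quoted post.
# Increments per WGP and for the wider bus are taken from the quote; the
# baseline and RT-hardware figures are assumptions, not measured values.

AREA_PER_WGP_MM2 = 36.4 / 8   # quoted: 8 extra WGP ~ 36.4 mm^2
WIDER_BUS_MM2    = 13.19      # quoted: extra area for the wider memory bus

def estimate_die_mm2(baseline_mm2, extra_wgp, wider_bus=True, rt_hw_mm2=0.0):
    area = baseline_mm2 + extra_wgp * AREA_PER_WGP_MM2
    if wider_bus:
        area += WIDER_BUS_MM2
    return area + rt_hw_mm2

BASELINE = 321.0  # assumed APU baseline that reproduces the quoted ~371 mm^2
print(estimate_die_mm2(BASELINE, extra_wgp=8))                  # ~371 mm^2, no RT
print(estimate_die_mm2(BASELINE, extra_wgp=8, rt_hw_mm2=10.0))  # ~381 mm^2 with RT
```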

Correct. So clocking the GPU core and the memory interface.
Do you think they will eventually switch to 5nm? Or will that be ignored and they go straight to 3nm GAAFET instead?

It all depends on economics and whether the savings are worth the engineering effort.
Because the RX 5700 XT with 40 CUs at 1.8 GHz already has a 225W TDP (with an estimated power consumption of 160W) and a die size of 251 mm².
People will have to explain where they go from here and add like 35% more CUs when the GPU alone eats up 160W and the CPU might tap 40W.

As Rylen points out below, it's about balancing clock speed with CU count. The RX 570 uses 75W less than the RX 590 with just 4 fewer CUs and a couple hundred MHz lower clocks.

Because one of these consoles is rumored to be 390+ mm², and adding CUs isn't nearly as detrimental to power draw as raising clock speeds.

I.e., the GPU will be more efficient per watt with more CUs and lower clocks.

If it's really 390 mm², then 40 CUs doesn't add up.
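
A hedged sketch of why wider-and-slower tends to win on power: dynamic power scales roughly with the number of active units times voltage squared times frequency, and higher clocks generally need higher voltage. The voltage/frequency relationship below is invented for illustration and is not calibrated to RDNA or to any real card.

```python
# Toy model of GPU dynamic power: P ~ units * V^2 * f.
# The assumed voltage/frequency curve is purely illustrative; it is not
# calibrated to RDNA or to any rumoured console part.

def relative_power(cu_count: int, clock_ghz: float) -> float:
    voltage = 0.7 + 0.25 * clock_ghz            # assumed V/f relationship
    return cu_count * voltage ** 2 * clock_ghz  # arbitrary units

wide_and_slow   = relative_power(48, 1.5)  # more CUs, lower clocks  (~9.2 TF)
narrow_and_fast = relative_power(40, 1.8)  # fewer CUs, higher clocks (~9.2 TF)
print(narrow_and_fast / wide_and_slow)     # > 1: same throughput, more power
```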
Forgive me if I missed something, but according to the Gonzalo leak it looks like the L3 cache on the CPU is half the size of the desktop parts. It makes sense to cut back there a bit since they're already a huge upgrade, and I got the sense from the DF video that the L3 takes up a lot of space. Is that something that everyone is already taking into account, or is it not enough of a difference on the scale of a console APU to matter?

I believe the decoder has no entry for the cache part of the string, so it is unknown.
Was this posted already??

By Komachi:

Navi10Lite
Navi12Lite
Navi21Lite

Also Navi10Lite has two GFX IDs. With Vega if a GPU had two GFX IDs, HBCC was enabled.

This could possibly mean that Navi10 has HBM??

Of course nothing is confirmed 100%



HBCC is beneficial regardless of RAM setup. Also I agree with others that the remaining two are likely Lockhart and Anaconda.

Why would console APUs be found in desktop Linux commits? It doesn't make sense to me. Were the PS4, Pro, XB1 and Scorpio APUs also found there? Whatever those are, they are not home console APUs IMO.

Previous console APUs leaked out in a similar manner. Read the DF story on Gonzalo.
 

BreakAtmo

Member
Nov 12, 2017
12,987
Australia
About the HBM2 + DDR4 solution, I think the fact that we're dealing with a split memory pool will be a bigger issue than the HBM2 portion having slightly lower bandwidth than GDDR6. Sure, there are supposedly some shenanigans going on to make the two memory pools look like one pool, and the memory is supposed to be allocated automatically, but what if a developer decides to use 12 GB or more of the 24 GB as VRAM? The next-gen Xbox would be at a rather big advantage in that case, and that scenario doesn't sound all that unrealistic, especially when we look at how unified memory was used this gen.

EDIT:

I mean, this would still be my preferred solution. It would make next gen more interesting, and the TDP gains could be invested in higher clocks, but there are some significant downsides.

The dev would be perfectly able to use that much as VRAM if they liked, and from what I gather, not only would the DDR4 be just fine for certain GPU tasks that don't need high bandwidth, but the crazy-fast SSD would make needing more than 8 GB of HBM2 very unlikely, since you'd need to do so much less buffering.
 

chris 1515

Member
Oct 27, 2017
7,077
Barcelona Spain
Memory chip power consumption also depends on the speed of the chips (i.e. memory clocks) and thus voltages.

Here this is for 14 Gbps GDDR6... The 8 GB HBM2 figure comes from another article: 20 watts max for 16 GB of HBM2. 16 GB of DDR4-3200, I suppose, doesn't consume much more than DDR4-2800, and all 16 GB of DDR4 seems to be under 7 watts. I probably took the worst case for DDR4.


Speaking with Buildzoid, we know that Vega: Frontier Edition's 16GB HBM2 pulls 20W max, using a DMM to determine this consumption
 
Last edited: