
What do you think could be the memory setup of your preferred console, or one of the new consoles?

  • GDDR6 — Votes: 566 (41.0%)
  • GDDR6 + DDR4 — Votes: 540 (39.2%)
  • HBM2 — Votes: 53 (3.8%)
  • HBM2 + DDR4 — Votes: 220 (16.0%)
  • Total voters: 1,379
Status
Not open for further replies.

GhostTrick

Member
Oct 25, 2017
11,415
Correct. So clocking GPU core and the memory interface.


It all depends on economics and whether the savings are worth the engineering effort.


As Rylen points out below, it's about balancing clock speed with CU count. The RX 570 uses 75W less than the RX 590 with just 4 fewer CUs and a couple hundred MHz lower clocks.




I believe the decoder has no entry for the cache part of the string, so it is unknown.


HBCC is beneficial regardless of RAM setup. Also I agree with others that the remaining two are likely Lockhart and Anaconda.



Previous console APUs leaked out in a similar manner. Read the DF story on Gonzalo.

First, the RX 570 has a rated TDP of 120W and the RX 590 of 175W. As for power consumption, I'd rather compare the RX 570 to the 580; the RX 590 is pushed far beyond its optimal clocks in terms of thermals. Or even the RX 480 to the RX 470. You're taking two extreme examples to explain a difference in power consumption.

Compare the RX 5700 to XT.
The difference in thermals isn't that extreme for comparable clocks.

Some people here are talking about adding 35% more CU at the same clocks as the RX 5700 XT for the same power consumption.
 

DavidDesu

Banned
Oct 29, 2017
5,718
Glasgow, Scotland
This is going to be a really stupid question, but it just popped into my head, given the talk about the PS5 having this 8GB pool of fast RAM and the rest slower... but how much RAM is used, or can be used, to actually render a frame? Obviously the answer will vary, but I mean, is there a limit? Can the GPU only draw from a limited amount of RAM to render a frame? Can it use all of it, or is there a power or bandwidth limit it will hit first? I'm guessing RAM is only utilised in fairly small parts to render a frame, with the rest being used to store all sorts of other information that needs to be ready at a moment's notice, but isn't all utilised simultaneously.

I know NOTHING besides what I glean from forums like this, clearly.

Just trying to work out if Sony's solution of a shared pool of varying memory, combined with super fast streaming off SSD will be at a disadvantage or not compared to just a huge lump of GDDR6 when it comes to real world rendering.
 
Jan 17, 2019
964
Dude.. No one is taking his quote as fact lmao. We're talking about the marketing and wondering why they're confident Anaconda is going to be more powerful.

Didn't you take Klob's provided document as fact (along with hmqgg's posts), that Anaconda being more powerful is set in stone? Yeah, their statements make sense, but Reiner's tweets don't make sense at all. Right.

I seriously don't know what you're doing. Some people in here seriously need to stop reading shit that isn't there. Nowhere did I say that lol.

And some people are reading books from Reiner's tweets lol.

And yet some people are reading books from Klob's HOLY provided document and hmqgg's posts. That's OK, I presume. I smell hypocrisy here from a hundred miles away.
 
Last edited:

anexanhume

Member
Oct 25, 2017
12,918
Maryland
First, the RX 570 has a rated TDP of 120W and the RX 590 of 175W. As for power consumption, I'd rather compare the RX 570 to the 580; the RX 590 is pushed far beyond its optimal clocks in terms of thermals. Or even the RX 480 to the RX 470. You're taking two extreme examples to explain a difference in power consumption.

Compare the RX 5700 to XT.
The difference in thermals isn't that extreme for comparable clocks.

Some people here are talking about adding 35% more CU at the same clocks as the RX 5700 XT for the same power consumption.
Sorry, I was referencing TBP. I specifically reference the RX 590 because its TBP matches the 5700 XT's, suggesting its GPU is pushed similarly hard.

I would really like to see a 5600 with 36 CUs and lower clocks to compare.

HBM2 hype

The PS4 got leaked out from a desktop Linux commit? Are you sure you're following the thread closely?

Previous codenames were found via commits and/or 3DMark results, as stated in DF article. Yup, pretty sure I can read. Thanks for checking.

Furthermore, if you read the DF article, you can see he expresses the exact same skepticism and provides a potential answer, too.
 
Last edited:

Nivek

Banned
Apr 24, 2019
85
User warned: console warring
Fact is, xbots leaked a document and blacked out the teraflops number since it is lower than the PS5's.
 

GhostTrick

Member
Oct 25, 2017
11,415
Sorry, I was referencing TBP. I specifically reference the RX 590 because its TBP matches the 5700 XT's, suggesting its GPU is pushed similarly hard.

I would really like to see a 5600 with 36 CUs and lower clocks to compare.



Previous codenames were found via commits and/or 3DMark results, as stated in DF article. Yup, pretty sure I can read. Thanks for checking.


The RX 5700 has 36CUs and lower clocks. Rated at 180W of TBP.
The RX 5700 XT had 40CUs and higher clocks. Rated at 225W of TBP.
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
The RX 5700 has 36CUs and lower clocks. Rated at 180W of TBP.
The RX 5700 XT had 40CUs and higher clocks. Rated at 225W of TBP.
I'm aware. There is no comparable card to the RX 570 in terms of TBP, which is what I'm talking about. If you get another ~30W back by clocking 125MHz lower and disabling 4 CUs, like the 570 does relative to the 580, then those clocks make a hell of a lot more sense for a console. The Xbox One X clocks are very close to the RX 570's, and it has more CUs than any Polaris GPU.

In summary:
RX 590: 36 CU, 1545/1468 MHz core, 7000 MHz memory, 225W TBP.

RX 570: 32 CU, 1244/1168 MHz core, 6000 MHz memory, 150W TBP.

5700 XT: 40 CU, 1800 MHz core (gameclock), 14000 MHz memory, 225W TBP.

5600???: 36 CU, 1500 MHz core, 14000 MHz memory, ~160W TBP???
 
Last edited:
Oct 25, 2017
17,933
I can only assume this one.
[image: D5CFFQSXoAIsF-r.jpg]
 

GhostTrick

Member
Oct 25, 2017
11,415
I'm aware. There is no comparable card to the RX 570 in terms of TBP, which is what I'm talking about. If you get another ~30W back by clocking 125MHz lower and disabling 4 CUs, like the 570 does relative to the 580, then those clocks make a hell of a lot more sense for a console. The Xbox One X clocks are very close to the RX 570's, and it has more CUs than any Polaris GPU.


Right, but then again, it's about modulating to get the same power. Not more.
People here are arguing that they'd add 14 (!!!) more CUs (for the record, the Xbox One X has 2 more CUs than the RX 580) at the same 1800 MHz clock speed as the RX 5700 XT to reach 12.44 TFLOPS. Or even at a lower clock speed, like 1500 MHz for 10.3 TFLOPS. You're still adding 14 more CUs, and dropping 300 MHz doesn't make up for that.
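The 12.44 and 10.3 TFLOPS figures being argued about fall straight out of the standard peak-FP32 formula. A quick sketch (`peak_tflops` is just an illustrative helper, not from any post):

```python
# Peak FP32 throughput for GCN/RDNA-style GPUs: each CU has 64 stream
# processors, and each SP retires 2 FLOPs per clock (fused multiply-add).

def peak_tflops(cus: int, clock_mhz: float) -> float:
    """Peak FP32 TFLOPS = CUs * 64 SPs * 2 FLOPs/clock * clock."""
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

# The hypothetical 40 + 14 = 54 CU part discussed above:
print(round(peak_tflops(54, 1800), 2))  # 12.44
print(round(peak_tflops(54, 1500), 2))  # 10.37
```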
 

modiz

Member
Oct 8, 2018
17,905
Seems like komachi backtracked off the previous speculation that Navi 12 LITE and Navi 21 LITE are Arden.
 

Nightengale

Member
Oct 26, 2017
5,712
Malaysia
The whole "company X saw Y and then made a change to an integrated semi-custom APU without any impact to its release date" is complete nonsense. This isn't a company just picking up a random GPU for sale and slotting it into a PCIe slot of a tower PC; this is a multi-year, highly customised piece of electronics that has to be rigorously tested, ordered in bulk, and finalised in specification well in advance.
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
Right, but then again, it's about modulating to get the same power. Not more.
People here are arguing that they'd add 14 (!!!) more CUs (for the record, the Xbox One X has 2 more CUs than the RX 580) at the same 1800 MHz clock speed as the RX 5700 XT to reach 12.44 TFLOPS. Or even at a lower clock speed, like 1500 MHz for 10.3 TFLOPS. You're still adding 14 more CUs, and dropping 300 MHz doesn't make up for that.
You save about 33% TBP in that scenario. They're talking about adding 35% more CUs. It will all come down to clocks.
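As a rough rule of thumb (my assumption, not anything stated in the thread): dynamic power scales with frequency times voltage squared, and near the top of the voltage/frequency curve voltage roughly tracks frequency, so a cube law gives an upper bound on the saving from downclocking:

```python
# Rule-of-thumb sketch (assumption): dynamic power ~ f * V^2, with V
# roughly proportional to f near the top of the curve, so P ~ f^3.
def relative_dynamic_power(f_new: float, f_old: float) -> float:
    return (f_new / f_old) ** 3

# Clocking a 5700 XT-class part down from 1800 MHz to 1500 MHz:
saving = 1 - relative_dynamic_power(1500, 1800)
print(f"{saving:.0%}")  # 42%
```

Static power and voltage floors mean real-world savings come in below this upper bound, which is at least in the same ballpark as the ~33% TBP figure above.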
 

GhostTrick

Member
Oct 25, 2017
11,415
I'm aware. There is no comparable card to the RX 570 in terms of TBP, which is what I'm talking about. If you get another ~30W back by clocking 125MHz lower and disabling 4 CUs, like the 570 does relative to the 580, then those clocks make a hell of a lot more sense for a console. The Xbox One X clocks are very close to the RX 570's, and it has more CUs than any Polaris GPU.

In summary:
RX 590: 36 CU, 1545 MHz core, 7000 MHz memory, 225W TBP.

RX 570: 32 CU, 1244 MHz core, 6000 MHz memory, 150W TBP.

5700 XT: 40 CU, 1900 MHz core, 14000 MHz memory, 225W TBP.

5600???: 36 CU, 1600 MHz core, 14000 MHz memory, ~160W TBP???


A few corrections here:

RX 590: 36 CU, 1545 MHz core, 225W TBP.

RX 580: 36 CU, 1340 MHz core, 185W TBP.

RX 570: 32 CU, 1244 MHz core, 150W TBP.


5700 XT: 40 CU, 1900 MHz core, 225W TBP.

5700: 36 CU, 1725 MHz core, 180W TBP.

The 590 is the exception here. It's a model that is pushed far beyond its expected optimal clock to reach higher performance.


I have trouble seeing how one can add 14 CUs to the 5700 XT while maintaining HIGHER performance... at the same or lower TDP.
 

Andromeda

Banned
Oct 27, 2017
4,864
...



Previous codenames were found via commits and/or 3DMark results, as stated in DF article. Yup, pretty sure I can read. Thanks for checking.

Furthermore, if you read the DF article, you can see he expresses the exact same skepticism and provides a potential answer, too.
And/or? That's your answer to my specific question?

I am sorry, but you can't equate "Linux commits" with "3DMark results". They are very different kinds of 'leaks'.
 
Oct 26, 2017
6,151
United Kingdom
Can the passive aggressive platform wars posters please cease and desist?

This isn't the thread for that.

I'm at the point now where I think combining both Next-Gen consoles into one thread was a huge mistake. If folks can't play nice and be civil then maybe the mods should reconsider their position on it.

It becomes increasingly unbearable trying to sift through pages and pages of shitflinging posts just to try and find the few quality or insightful ones.

I would actually personally prefer a pure dedicated tech thread where if you're not there to contribute or learn you get banned.
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
And in that scenario, you're at 10.3 TFLOPS. Not 12.44 like some expect.
There's a possibility to have more CUs. But it'll be at lower clocks. To the point it'll nearly negate the performance gain.
10% higher performance at the same TDP is negation territory? The more pertinent question is if it justifies the die space. But if you already weren't going to clock it that high anyway, the perf gain is higher than 10%.

And/or? That's your answer to my specific question?

I am sorry, but you can't equate "Linux commits" with "3DMark results". They are very different kinds of 'leaks'.
That's a lot of words to say you didn't read the article.
 

BreakAtmo

Member
Nov 12, 2017
12,960
Australia
Can the passive aggressive platform wars posters please cease and desist?

This isn't the thread for that.

I'm at the point now where I think combining both Next-Gen consoles into one thread was a huge mistake. If folks can't play nice and be civil then maybe the mods should reconsider their position on it.

It becomes increasingly unbearable trying to sift through pages and pages of shitflinging posts just to try and find the few quality or insightful ones.

I would actually personally prefer a pure dedicated tech thread where if you're not there to contribute or learn you get banned.

Welcome. You're a little late to the club, but we can read out the minutes.
 
Oct 26, 2017
6,151
United Kingdom




So, what the hell is "lite"...?!?


Komachi posts a lot of interesting stuff, but it's far too often hard to distinguish his personal speculation from actual information.

I fully expect him to be wrong on a lot more than just this.

Welcome. You're a little late to the club, but we can read out the minutes.

Lol. But seriously tho, the mods should really reconsider.

The current thread isn't working. It's generating more bans than actual meaningful discourse.
 
Oct 25, 2017
17,933
Can the passive aggressive platform wars posters please cease and desist?

This isn't the thread for that.

I'm at the point now where I think combining both Next-Gen consoles into one thread was a huge mistake. If folks can't play nice and be civil then maybe the mods should reconsider their position on it.

It becomes increasingly unbearable trying to sift through pages and pages of shitflinging posts just to try and find the few quality or insightful ones.

I would actually personally prefer a pure dedicated tech thread where if you're not there to contribute or learn you get banned.
I would at least consider it.

It is a shame to even have to think about doing something like that but here we are.
 

GhostTrick

Member
Oct 25, 2017
11,415
10% higher performance at the same TDP is negation territory? The more pertinent question is if it justifies the die space. But if you already weren't going to clock it that high anyway, the perf gain is higher than 10%.


That's a lot of words to say you didn't read the article.


"Nearly" as in "the expected gain nearly climbs back to the performance level of the original target". The RX 5700 XT has a peak performance of 10.13 TFLOPS. 10.3 TFLOPS isn't 10% more. It's 2% more.

Erf, never mind, I read the 50th Anniversary one by mistake.

9.75 for the regular one. So you get a 6% gain.
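For reference, that 6% figure checks out with the same peak-FP32 arithmetic (a sketch; the 1905 MHz boost clock for the regular 5700 XT is my assumption of the reference value):

```python
# RX 5700 XT (regular, not 50th Anniversary): 40 CUs at ~1905 MHz boost,
# 64 SPs per CU, 2 FLOPs per SP per clock.
xt_tflops = 40 * 64 * 2 * 1905e6 / 1e12    # ~9.75 TFLOPS
# Hypothetical 54 CU part at 1500 MHz:
hypo_tflops = 54 * 64 * 2 * 1500e6 / 1e12  # ~10.37 TFLOPS
gain = hypo_tflops / xt_tflops - 1
print(f"{gain:.0%}")  # 6%
```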
 

Outrun

Member
Oct 30, 2017
5,783
Can the passive aggressive platform wars posters please cease and desist?

This isn't the thread for that.

I'm at the point now where I think combining both Next-Gen consoles into one thread was a huge mistake. If folks can't play nice and be civil then maybe the mods should reconsider their position on it.

It becomes increasingly unbearable trying to sift through pages and pages of shitflinging posts just to try and find the few quality or insightful ones.

I would actually personally prefer a pure dedicated tech thread where if you're not there to contribute or learn you get banned.

I would go ahead and do that. It seems like a great idea.

Does anyone know what sort of cooling solution these machines will adopt?
 

VX1

Member
Oct 28, 2017
7,005
Europe


From that reddit post:

" [–]dad2youAMD [score hidden] 29 minutes ago*
"P" is for performance. PCI ID for Navi10 LITE was 13e9 - Ariel (PS5 SOC), codename Gonzalo. Navi12 LITE and Navi21 LITE might actually be related to another console, Scarlett, which is rumored to actually be 2 consoles. One would be 4-5TF (Navi12 LITE) and another would be 10-11TF (Navi 21 LITE). Navi 14 is certainly mobile part, but the other three I would say are console parts.
BTW there was revision of NAVI10_LITE. First was committed in January, and another in April (1GHz v 1.8GHz)."

"Actually, both PS4 and PS4Pro chips have been committed in 3DMark11 for example back in 2013/2015 - https://d2skuhm0vrry40.cloudfront.n....jpg/EG11/resize/690x-1/quality/75/format/jpg

I am 100% Gonzalo leak was console APU (PCI ID matched Ariel, which is PS5 codename and ID decoding showed "G" and Navi10 LITE, meaning console, not desktop."

More or less what we discussed here already.
 

modiz

Member
Oct 8, 2018
17,905
I found a fantastic tool to figure out the specs of the PS5.

InferKit


It's an AI that generates a paragraph from a sentence that you write. Now I can give everyone the real specs of the PS5!
Sony's PlayStation 5 specifications announced during the event, include a 2K HDR video-only display, an 8-core Intel i7 processor, 32GB of storage (up to 320GB in the US), 512GB of RAM, HDMI 2.0 out and AV output, dual front-facing speakers, a 5 megapixel rear camera unit, the ability to play up to 6 simultaneous videos on each TV.
Sony isn't going that far with the Sony Bravia line of 4K HDR TVs. This has resulted in Sony going with one product: the BQ Bravia. While the BQ Bravia was officially announced last month , it wasn't until late November that we got a good look at the 4K HDR features implemented in the Bravia 4K HDR set. This led to some mixed emotions at first; on one hand, it was impressive to see a company with such an established reputation on 4K HDR technology, and on the other hand, to see how they'd actually implement it into their own products (which seemed to be a mix of Sony and Microsoft).
Despite being a bit of a late-conclusion by itself, it certainly made me reflect back on my time at Sony, and especially my time with the Bravia 4K TVs, that this was an important product announcement in that I was pretty much just blown away by the sheer power of
 