• Ever wanted an RSS feed of all your favorite gaming news sites? Go check out our new Gaming Headlines feed! Read more about it here.

Overall maximum teraflops for next-gen launch consoles?

  • 8 teraflops

    Votes: 43 1.9%
  • 9 teraflops

    Votes: 56 2.4%
  • 12 teraflops

    Votes: 978 42.5%
  • 14 teraflops

    Votes: 525 22.8%
  • Team ALL THE WAY UP +14 teraflops

    Votes: 491 21.3%
  • 10 teraflops (because for some reason I put 9 instead of 10)

    Votes: 208 9.0%

  • Total voters
    2,301
Status
Not open for further replies.

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
Looks like another answer from this forum to those leaks, but if it's true (numbers are sensible), that would put PS5 at $399 and Ana at $499

If Sony can idd make a +11TF SKU for $399, with the known CPU and SSD and sufficient RAM, they have a winner, no much reason for their current install base to jump ships.

That's why I think they're targeting $399 if at all possible.

Leak is suspicious for that same reason though, 'cause it paints to perfect a picture for Sony: 56CU an 1550Mzh is the exact number of CU and clock to just got above 11TF, (while using less than 60CU)
We call this rumor "The Eternal Pirate" from now on ...
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
Right now the official estimated GDDR6 price is 9$-12$, depends on what type. So what did the rumor use? Those numbers? The price that Sony would have received for GDDR6 if they would have used it?

The estimated price use the link I use. I use this for my calculations nothing coming from my head.

This is me posting the link for it

https://www.guru3d.com/news-story/gddr6-significantly-more-expensive-than-gddr5,3.html


https://www.gamersnexus.net/guides/3032-vega-56-cost-of-hbm2-and-necessity-to-use-it

Here some consumption estimation for HBM2 and compared to GDDR5.


And last consumption for DDR4

https://www.tomshardware.com/reviews/intel-core-i7-5960x-haswell-e-cpu,3918-13.html
 

Thorrgal

Member
Oct 26, 2017
12,279
Yes. Finally getting the truth in this thread. Also, I've always wondered whether people mean that the collective mass of primates will eventually have written all the pieces so that someone could intelligently come in and assemble them into something meaningful? Or do they mean that one monkey will eventually get it perfectly right and write an entire play correctly in sequence all by itself?

Cuz.. A monkey endlessly banging on one typewriter would probably take hundreds of years just to randomly type 5 pages in correct sequence. We might get through the heat death of the universe and then looped back around to bigbang2 before we get an entire play. Hmm.. this is a math problem that I could maybe do. But probability has a way of making me overestimate my chances.

Anyway, what were we arguing about?

The impossibility to replicate the greatness of Shakespeare randomly :P

Only way it would be possible if one of the variables would be infinite.
We call this rumor "The Eternal Pirate" from now on ...

Ha ha and a nice welcome to a new poster that's on the older side of the spectrum, and makes sensible posts to boot.
 

Andromeda

Member
Oct 27, 2017
4,844
...
I think it's a BIT of a major stretch to say the Mantis Burn Racing developers said they reached native 4K BECAUSE OF FP16. The devs merely said it was something that was easy to implement without too much problematic loss in quality. And in "some parts" you get a big performance increase. But.. they also "really didn't need to do anything to get Mantis Burn Racing running at full native 4K on Xbox One X at a solid 60fps" without FP16.

There are definitely places for FP16 in next gen, but I hesitate to overstate its impact. It will help in certain cases. It will not get a sub4k game to 4k. It will not raise 40fps performance to 60fps except perhaps in the most artificial of benchmarks. It can help, and I hope it's another tool that devs can use next gen. But a fundamental game changer it has not proved to be over the last 2 years.

Or maybe I'm wrong.
They implied it in another interview. One with DF:
The team joked about 'Mantis magic' before revealing that exploiting enhancements made to the PlayStation 4 Pro GPU have paid dividends.

There are definitely places for FP16 in next gen, but I hesitate to overstate its impact.

How can you hesitate to overstate its impact ? In others games we often see XBX pushing 100% more pixels, and in those 2 games, correctly using FP16 RPM on Pro, both versions run at the same res / settings with similar performance. With subsequent updates, performance of Mantis Burn Racing was greatly improved at 4K on Pro until it reached the status of solid 60fps, I remember NX Gamer updates about this.
 

M.Bluth

Member
Oct 25, 2017
4,240
I think that getting a 52% discount on HMB2 over AMD's which buys HBM chips in the millions sounds almost like a fairytale.
Uh... Dunno about a specific discount figure, but a console manufacturer is gonna sell tens of millions of consoles. AMD is lucky to sell a few hundred thousands cards...
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
The estimated price use the link I use. I use this for my calculations nothing coming from my head.

This is me posting the link for it

https://www.guru3d.com/news-story/gddr6-significantly-more-expensive-than-gddr5,3.html


https://www.gamersnexus.net/guides/3032-vega-56-cost-of-hbm2-and-necessity-to-use-it

Here some consumption estimation for HBM2 and compared to GDDR5.


And last consumption for DDR4

https://www.tomshardware.com/reviews/intel-core-i7-5960x-haswell-e-cpu,3918-13.html
Thing is, you are adding up a rumor (Sony got HBM2 for 40% more than GDDR6), an estimate (GDDR6 pricing table), and a guess (GDDR6 cost 40% less to Sony than OEM volume buyers like AMD). These kinds of calculation usually end up pretty far from reality.

Uh... Dunno about a specific discount figure, but a console manufacturer is gonna sell tens of millions of consoles. AMD is lucky to sell a few hundred thousands cards...

I'm sure they are, but less than half of what AMD pays? The same logic can be applied to GDDR6 in which in this case they can get 13Gb/s GDDR6 for 4.44$ per 1GB, that's 106.56$ for 24GB of GDDR6 or 16GB for 71.04$. Or how about 16GB of 14Gb/s GDDR6 with a 256-bit bus (448Gb/s) with an 8GB of DDR4 on a 128-bit bus? That's 89$ according to the same HMB2 rumor logic, the same amount Sony spent on memory for the PS4.
 
Last edited:

anexanhume

Member
Oct 25, 2017
12,912
Maryland
anexanhume, there was something else I'd like to know. The rumour said that the cost of the HBM2 would be reduced due to the InFO_MS making the interposer unneeded, correct? If that's the case, does that mean that HBM is going to just become cheaper everywhere, with InFO_MS replacing the interposer? Or is there some catch to it?
It has a practical die size limit. InFO is best suited to die under 400 mm^2 from the image I shared earlier. Of course, that will change over time.
 

edryr

Banned
Feb 15, 2018
126
How can you hesitate to overstate its impact ? In others games we often see XBX pushing 100% more pixels, and in those 2 games, correctly using FP16 RPM on Pro, both versions run at the same res / settings with similar performance. With subsequent updates, performance of Mantis Burn Racing was greatly improved at 4K on Pro until it reached the status of solid 60fps, I remember NX Gamer updates about this.


Comparing ps4 pro and X based on the fp16 is absolutely irrelevant. We don't know the level of optimisation or the time they spent on each platform.
FP16 can only be used on a selected and restricted number of operations/effects. Add to this the fact that the same operation have to be applied on 2 elements of a wavefront during the same cycle wich creates many many dependencies that doesn't exist with fp32, wich means you lose lots of efficiency, kind of what happened with previous amd architecture ( vliw 4 & 5 ).

From what i've tested so far, you can get around 20% more perfs using rpm against only fp32 AT BEST ( more in the 10-15% generally ), and only on selected scenes or shader effects.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
Thing is, you are adding up a rumor (Sony got HBM2 for 40% more than GDDR6), an estimate (GDDR6 pricing table), and a guess (GDDR6 cost 40% less to Sony than OEM volume buyers like AMD). These kinds of calculation usually end up pretty far from reality.



I'm sure they are, but less than half of what AMD pays? The same logic can be applied to GDDR6 in which in this case they can get 13Gb/s GDDR6 for 4.44$ per 1GB, that's 106.56$ for 24GB of GDDR6 or 16GB for 71.04$. Or how about 16GB of 14Gb/s GDDR6 with a 256-bit bus (448Gb/s) with an 8GB of DDR4 on a 128-bit bus? That's 89$ according to the same HMB2 rumor logic, the same amount Sony spent on memory for the PS4.


Read the article I link. The 40% less is for an OEM buyer like AMD, it is between 20% and 40% of discount GDDR6 compared to spot price depending of the quantites. For Sony and MS I took the maximum because of the quantites they will use. Everything I talk about have a credible source. There is no guess on my part. Everything looks plausible. And for consumption scaling nearly lineraly with quantity it is at least true for DDR4 maybe GDDR6 or HBM2 follow different rules of physics but I doubt it. The few link and anexanhume link and DRAMEXchange spot price are enough to do an estimate of price and consumption, you can do your own one. I did the same discount price for DDR4 than in guru3d article no reason to not have discount price on DDR4 RAM too.

https://www.guru3d.com/news-story/gddr6-significantly-more-expensive-than-gddr5,3.html

The mentioned prices correspond to a purchase quantity of 2,000 pieces. Manufacturers of video cards are likely to buy larger quantities, which means they could get the parts cheaper. 3dcenter.org estimates the possible discount at 20 to 40 percent, the prices in the table below would only be estimates.

EDIT: The price they give is not the price for AMD maybe AMD will only have a discount of 20% and Sony a discount of 40%...

From the prices quoted, it can be calculated that total storage costs only about $ 35-45 (GDDR5) or $ 55-75 (GDDR6) for say 8GB GiB. Doubling the amount of storage to 16 GiB would also double the purchase cost, so the storage would cost around $ 70- $ 90 or $ 110-150. Cost factors such as assembly and even the more complex boards are not taken into account.

Using simple math at 40% discount 16Gb of GDDR6 cost 110 dollars, 8 Gb 55 dollars for 24 GB GDDR6 this is 165 dollars...

For HBM2 is 40% more than 8Gb GDDR6 is 77 dollars pretty simple and for 16 GB DDR4 I took the spot price on DRAM exchange and applie a discount of 40%. No fancy maths...
 
Last edited:

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Very interesting.

That blu Ray disc drive is one of the reasons I would like to see a discless version with a bigger HDD as an option.

Also, if we're increasing the price to $500 *and* saying that Sony/MS are willing to take a reasonable loss on on the hardware (say $50) that would give them an extra $150 to spend on APU and RAM...

How much RAM could they put in for $150-175?
well if they are taking alossof$50 on a $500 console that actually gives them $200 more (approximately) to spend on everything else even if taking my most expensive base estimates. In that case sony could double the amount of DDR4 while keeping the HBM2 allotment the same but I don't see the need. They would sooner spend more on a better cooling system to be able to clock their APU higher than spend more on RAM that they wouldn't even need.

How is 8GB HBM2 + 16GB of DDR4 priced at 100$? HBM2 is 160$ per 8GB for AMD level OEM volume purchase, even if Sony is able to get some amazing deal and pay 50% less than AMD it's still 90$ for 8GB of HBM2 and then, on top of that, you need to add 16GB of DDR4 which retails at around 90$ today. So even if Sony gets 50% from AMD's price on HBM2 and 50% off the retail price of DDR4 than you get ~135$ and that's before taking into account the extra memory controller.
This is pretty simple actually. First off as of 2017 HBM RAM was estimated to cost around $160 for 8GB worth. I dare say that it doen't cost that much today much less next year. And if we are to believe the rest of the leak that actually birthed this whole HBM2 thing, sny are geting a mix of "rejects" and normal chips hence the "amazing deal". So I think it is not one safe to say that HBM2 costs at least $20 less today than it did in 2017, but its also safe to say that that amazing deals Sony is supposedly getting means it drops from like $140 to like $80 using your 50% ower thing. And this isn't factoring in the volume of order sony will be making.

And I don't know how you arrive at 16GB o DDr4 cause I never said 16GB of DDR4. I said 8GB. But just keep in mind that it will probably cost sony no more than $32 for 8GB of DDR4if going by whats currently on DRAM exchange right now. As far as I am concerned, if they are using HBM then its being paired with 8GB of DDR4 not 16GB of DDR4. And memory controllers costs go to the APU. So that falls into the whole what can you fit in a 350mm2 chip. It doesn't matter how many things you throw into a chip, what determines a chis price is its size and yield.
 

Putty

Double Eleven
Verified
Oct 27, 2017
929
Middlesbrough
Here's a thing, the more wildly speculative spec rumours appear, and ultimately get tore down for X reasons....people then take that teardown info then try to come up with something that then seems plausible on paper, then post on wherever "my dad is Mark Cerny's gardiner, and he saw the PS5 specs next to his lawnmower".

It's also pretty clear supporters of each company are and will attempt to downplay "the potential other box". In all honesty...ANYONE could end up with the most powerful...
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
So if we go for 35% we talking something about 182$ just for the memory ..then you have to add the controllers ....this for a result that add complexity and less performance (bandwidth) than gddr6....

so the mixed hbm2 + ddr4 results are...

CONS:
Complex hw design
Memory performance
Cost

PRO
Future cost saving*
TDP advantage

If at all we will see hbm on the mid gen refresh..
I don't know how you guys are doing your HBM math lol.....

if the rumor says HBM costs 35% more than GDDR6 hell lets even make it 40%. So we are looking at 40% more than $55 for 8GB of GDDR6. And this is a worst case scenario. That means that 8GB of HBM2 RAMfor sony costs around $70.

And this controller math.... no no no and no. The controllers are an added cost to the APU cause that's where they are. And it cost is measured in space taken in the APU, which basically means you either get a bigger or smaller chip and the bigger the chip you get the more expensive it is. We have already worked out how much can be fit into a 350mm2 APU in this thread..... and we also have info that a 7m chip that size will cost them around $150 to make.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
The discounted price are from the article not
Here's a thing, the more wildly speculative spec rumours appear, and ultimately get tore down for X reasons....people then take that teardown info then try to come up with something that then seems plausible on paper, then post on wherever "my dad is Mark Cerny's gardiner, and he saw the PS5 specs next to his lawnmower".

It's also pretty clear supporters of each company are and will attempt to downplay "the potential other box". In all honesty...ANYONE could end up with the most powerful...

I did all this search because the rumor sound plausible. I maybe do this in vain. Because the rumor is false.

Another things giving credibillity to the rumor. If 8 Gb of RAM HBM2 was the spot price was 160 dollars apply 40% of discount would mean 96 dollars for 8 Gb of HBM2 not 77 dollars but not twice the price.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
I don't know how you guys are doing your HBM math lol.....

if the rumor says HBM costs 35% more than GDDR6 hell lets even make it 40%. So we are looking at 40% more than $55 for 8GB of GDDR6. And this is a worst case scenario. That means that 8GB of HBM2 RAMfor sony costs around $70.

And this controller math.... no no no and no. The controllers are an added cost to the APU cause that's where they are. And it cost is measured in space taken in the APU, which basically means you either get a bigger or smaller chip and the bigger the chip you get the more expensive it is. We have already worked out how much can be fit into a 350mm2 APU in this thread..... and we also have info that a 7m chip that size will cost them around $150 to make.

We even have a consumption for HBM2 controller in the link I gave. I gave what I call a realistic range for HBM2 + DDR4 because I don't have the DDR4 controller consumption(giving a max of 10 watts like HBM2 controller) and only DDR 2800 consumption not exactly 3200 DDR4 consumption for example.
 

Deleted member 1589

User requested account closure
Banned
Oct 25, 2017
8,576
The discounted price are from the article not


I did all this search because the rumor sound plausible. I maybe do this in vain. Because the rumor is false.

Another things giving credibillity to the rumor. If 8 Gb of RAM HBM2 was the spot price was 160 dollars apply 40% of discount would mean 96 dollars for 8 Gb of HBM2 not 77 dollars but not twice the price.
The rumor has also survived having questioned multiple times. Cost aside (which remains the biggest question mark) it remains one of the most impressive leaks I've seen.

We can argue if 26GB GDDR6 would be better for a console though. Just because we are fascinated by this rumor doesn't mean that it would be the far superior to that.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
Here's a thing, the more wildly speculative spec rumours appear, and ultimately get tore down for X reasons....people then take that teardown info then try to come up with something that then seems plausible on paper, then post on wherever "my dad is Mark Cerny's gardiner, and he saw the PS5 specs next to his lawnmower".

It's also pretty clear supporters of each company are and will attempt to downplay "the potential other box". In all honesty...ANYONE could end up with the most powerful...
It is time you do your own rumor:
"Hey all, D11 here. We have no dev kits but I am really well connected within the industry. Sources tell me to calm down about the specs. Next gen will be fabulous anyway. KTHXBAI"
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
The rumor has also survived having questioned multiple times. Cost aside (which remains the biggest question mark) it remains one of the most impressive leaks I've seen.

We can argue if 26GB GDDR6 would be better for a console though. Just because we are fascinated by this rumor doesn't mean that it would be the far superior to that.

If it is only a rumor, I think Sony and MS need to think about if for midgen refresh using HBM3 like this no problem for bandwitch...

EDIT: And for DDR4 this is not consumption estimastion but measurement of DDR4 consumption. 16GB of DDR4 consume nearly twice of 8Gb of DDR4.
 

BreakAtmo

Member
Nov 12, 2017
12,815
Australia
It has a practical die size limit. InFO is best suited to die under 400 mm^2 from the image I shared earlier. Of course, that will change over time.

Ah, I see. Well, at least that's good for consoles. Maybe it could even be used in the Switch 2, if it uses Wide IO or some kind of LPHBM. It would be great to see a Switch 2 with RAM that matches or exceeds the PS4's.
 

tusharngf

Member
Oct 29, 2017
2,288
Lordran
https://www.reddit.com/r/PS5/comments/bq7nlr/next_gen_consoles_specs/

Another leak! It was about time, the other ond was almost a couple of hours old.

I like the wording :"PS5 is a very nice and powerful machine.... "


PS5 is a very nice and powerful machine, it shares the same design philosophy as the PS4. The GPU is 11.1tflops at 1550mhz It has a single pool of 16gb gddr6 with a separate 4gb of ddr4 allocated to the OS. The rest of the specs are known.


The next gen Xbox's are very similar in design, the higher end model has a very Similar GPU to PS5 but it's more powerful with a 1655mhz clock which produces 12.7tflops and 24gb of gddr6 with 4gb for the OS. the lower end model is basicly half the GPU and 16gb gddr6.


11tf is not that bad even if its a rumor.
 

Screen Looker

Member
Nov 17, 2018
1,963
Here's a thing, the more wildly speculative spec rumours appear, and ultimately get tore down for X reasons....people then take that teardown info then try to come up with something that then seems plausible on paper, then post on wherever "my dad is Mark Cerny's gardiner, and he saw the PS5 specs next to his lawnmower".

It's also pretty clear supporters of each company are and will attempt to downplay "the potential other box". In all honesty...ANYONE could end up with the most powerful...

Looks like nothing has changed in this thread in the last couple weeks then, eh?

I'll check back here again After E3 and see how the tables have exploded lol
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Read the article I link. The 40% less is for an OEM buyer like AMD, it is between 20% and 40% of discount GDDR6 compared to spot price depending of the quantites. For Sony and MS I took the maximum because of the quantites they will use. Everything I talk about have a credible source. There is no guess on my part. Everything looks plausible. And for consumption scaling nearly lineraly with quantity it is at least true for DDR4 maybe GDDR6 or HBM2 follow different rules of physics but I doubt it. The few link and anexanhume link and DRAMEXchange spot price are enough to do an estimate of price and consumption, you can do your own one. I did the same discount price for DDR4 than in guru3d article no reason to not have discount price on DDR4 RAM too.

https://www.guru3d.com/news-story/gddr6-significantly-more-expensive-than-gddr5,3.html



EDIT: The price they give is not the price for AMD maybe AMD will only have a discount of 20% and Sony a discount of 40%...



Using simple math at 40% discount 16Gb of GDDR6 cost 110 dollars, 8 Gb 55 dollars for 24 GB GDDR6 this is 165 dollars...

For HBM2 is 40% more than 8Gb GDDR6 is 77 dollars pretty simple and for 16 GB DDR4 I took the spot price on DRAM exchange and applie a discount of 40%. No fancy maths...
And again, that's a combination of rumors and estimates. The 40% discount is an estimate, the 35%-40% more than a GDDR6 is a rumor and the GDDR6 prices are again an estimate. When you base accurate calculations on a bunch of estimates stacked up one another, the result could wildly differ from reality. If every estimate is off by just 10%, when you stack 3 of them you are already off by 33% which in this case is 37$.

The root of the problem is that original HMB2 rumor because:
1) Everything in that rumor could be applied to GDDR6 too. Buying GDDR6 in large volumes will drive its' price down too and buying lower quality chips and clocking them lower will make them cheaper too.
2) info_ms was said to be in "in the early stages of R&D " in January 2019. The PS5 was architecture locked long before that so I doubt that their whole memory setup is reliant on something that was just a few months ago "in the early stages of R&D". Although info_ms will find its' way to products by the end of 2019, I'm not sure that a machine that is being designed years in advanced could rely on it.
3) Sony gets 52% cheaper than what AMD is getting? sounds crazy to me. It should sound crazy to you too when you are saying that AMD gets a 20% discount and Sony a 40% discount while 52% discount compared to AMD results in a total of 61% discount.

Another things giving credibillity to the rumor. If 8 Gb of RAM HBM2 was the spot price was 160 dollars apply 40% of discount would mean 96 dollars for 8 Gb of HBM2 not 77 dollars but not twice the price.
160$ is the price that AMD gets, so it's already discounted so you can't apply 40% on top of that. If AMD gets 20% (your guess), then it's 200$ per 8GB of HBM2 before discount which is 120$ after 40% discount that Sony gets. Then you have to add in the 16GB of DDR4 and its' extra memory controller and you are well over 150$ if not 160$.
 
Last edited:

Putty

Double Eleven
Verified
Oct 27, 2017
929
Middlesbrough
Remember guys i'm speculating like everyone else, and as i've kept on saying i know absolutely nothing....nothing...but I DO know that my expectations are always kept in check because if somethings to good to be true, then....well you know the rest.
 

Locuza

Member
Mar 6, 2018
380
I skimmed for over 3 hours from page 74 to 265.
And that was already quite a ride and experience.

Well the changes to the graphics and compute array to support RPM was the most significant architectural change Vega brought.

And tbh, even Vega didn't launch with the "complete" package of features it was announced with... so by this logic no current Vega chip is actually Vega.
I would argue that the DSBR, ROPs as L2$-clients and HBCC were much more significant changes.

Actually, XB1's design was vanilla GCN2, before the arrangement of the GCA (graphics and compute array) into 4 Shader Engines.

Each SE has 2 ACEs, so going less than 4 SEs means fewer ACEs and thus lower overall asynchronous compute performance... I doubt they'd want that.
So I'm pretty sure the CU count (i.e. active plus deactivated) needs to be divisible by four for GCN4 and above.
Edit:
I misremembered. GCN2 introduced Shader Engines. But I think AMD does still maintain a 2x ACEs per SE ratio.
SE isn't just a name. It's an architectural arrangement. You're missing the point here. GCN2 didn't "add more SEs", it introduced the arrangement of SEs which wasn't present in GCN1.
It is just semantics in regards to the SEs.
AMD didn't changed the internal hardware scaling with GCN2, it was like that since GCN1.
The kernel driver for the software stack under Linux uses the same nomenclature and parameters for every GCN chip.

case CHIP_TAHITI:
adev->gfx.config.max_shader_engines = 2;
adev->gfx.config.max_tile_pipes = 12;
adev->gfx.config.max_cu_per_sh = 8;
adev->gfx.config.max_sh_per_se = 2;
adev->gfx.config.max_backends_per_se = 4;
adev->gfx.config.max_texture_channel_caches = 12;
adev->gfx.config.max_gprs = 256;
adev->gfx.config.max_gs_threads = 32;
adev->gfx.config.max_hw_contexts = 8;
https://github.com/RadeonOpenComput.../master/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c

In addition "ACEs" don't scale with the number of SEs.
According to bridgman and the Linux drivers AMD actually counts and scales MECs (Micro Engine Compute).
Since GCN2 there is one block with 4 compute pipes (ACEs).

Kabini (low-cost APU with 128 GCN2 CUs) had 1 MEC and supported as such 32 compute queues.
It was just one Shader Engine.

Kaveri (Mainstream APU with 512 GCN2 CUs) has 2 MECs, support for 64 compute queues and also just one Shader Engine.

I remember a year and more ago, people were thinking PS5 and the next Xbox consoles would use Vega GPUs. Of course, that was absurd, especially when you realize that Vega was originally due around mid 2016 (then early 2017) but didn't reach Radeon gaming cards until Autumn 2017. Vega was very, very late. This GPU had been anticipated as far back as early 2015, as it was known as Greenland, AMD's next flagship GPU of Arctic Islands, to succeed Fiji. there would be Greenland desktop GPUs and there would be high core count Zen +Geenland stream+HBM APUs, etc.

March 2015 Fudzilla article - AMD Greenland HBM graphics coming next year

April 2015 - AMD x86 16-core Zen APU detailed


uLSLgLy.jpg


Anyway, to keep things simple, Greenland = Vega 10.

The Arctic Islands series was meant to consist of 3 GPUs. Baffin, Ellesmere and Greenland. But things got split up into two GPU families. Arctic Islands was no more. Ellesmere was Polaris 10 and Baffin was Polaris 11 and Greenland was Vega 10.

Imagine consoles launching in Fall 2020 using Vega/Greenland GPUs which would be 3 to 4 year old technology at that point, depending on how you wanna look at Vega's development and delayed release. I've always believed that next gen consoles would use Navi or Next Gen, or a custom blend of both IP.
In the beginning I also thought that Greenland was renamed to Vega 10 like Ellesmere and Baffin to Polaris10 and 11 but that's obviously not what happened.
Greenland was supporting Half-Rate FP64, internal SRAM ECC and GMI for HPC APUs.
AMD cancelled the project and went with Vega10 and Vega20 instead, to adress the markets more specifically.

[...]
The pseudo-DX12 support on Win7 that was introduced with World of Warcraft a little earlier in the year is a bit of a hack to get multithreaded rendering on DX11.
Well it's not DX11.
It should be really the DX12 API and runtime retrofitted to work with the older WDDM model of W7.

PS: IIRC C1/Xenos from the X360 used an architecture branch which ATI developed for quite some time in regards to a Unified Shader Architecture.
But instead of using it around the 2005 time frame they developed the R300/400 base further and served the market with the R500 series till the R600 and derivatives served the PC market with ATI's first unified shader architecture.

The R600 had some crucial differences in comparison to the C1/Xenos.
The R600 was a VLIW5 archictecure where every "slot" could process one data element of it's own if no dependencies were present, what the marketing also liked to call "superscalar" back in the days.

C1/Xenos used Vec4 + 1 scalar ALUs, so less flexible and efficient.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
The rumor has also survived having questioned multiple times. Cost aside (which remains the biggest question mark) it remains one of the most impressive leaks I've seen.

We can argue if 26GB GDDR6 would be better for a console though. Just because we are fascinated by this rumor doesn't mean that it would be the far superior to that.
The way I see it is that sony going with an HBM+DDr4 combo actually is because it will cost less for them to accomplish that than for them to go with 24GB of GDDR6.

I mean we need to look at the system as a whole. Anyone using 24GB of GDDR6 isn't doing so because games suddenly need upwards of 20GB of ram. That's really just too much. But its because they need the memory bandwidth and hence a 384it bus aka 12 chips of GDDR6. But no matter what that puts the cost of RAM at around $150 on average. Anywhere from $135 all the way up to around $165 depending on deals. If using GDDR6. Now if you opt for 16GB of GGDR6 t means you are on a 256bit bus and that you are also sharing that bandwidth with you CPU... you end up creating a bottleneck for the system but at least having to spend only half the amount on RAM.

So to me, sony going with HBM is probably because that's the only way the can get the most bandwidth out of a 16GB total set up. While still costing less than what 24GB of GDDR6 would cost them. And its things like this that could really mean that MS has the more expensive console or spends more. So we could see one console have 16GB-20GB of total RAM while the other has 24GBof RAM but both using two very different RAM setups.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
And again, that's a combination of rumors and estimates. The 40% discount is an estimate, the 35%-40% more than GDDR6 is a rumor, and the GDDR6 prices are again an estimate. When you base precise calculations on a bunch of estimates stacked on one another, the result could be wildly off.

The root of the problem is the original HBM2 rumor, because:
1) Everything in that rumor could be applied to GDDR6 too. Buying GDDR6 in large volumes will drive its price down too, and buying lower-quality chips and clocking them lower will make them cheaper too.
2) InFO_MS was said to be "in the early stages of R&D" in January 2019. The PS5's architecture was locked long before that, so I doubt that their whole memory setup relies on something that was just a few months ago "in the early stages of R&D". Although InFO_MS will find its way into products by the end of 2019, I'm not sure that a machine being designed years in advance could rely on it.
3) Sony gets it 52% cheaper than what AMD is getting? Sounds crazy to me. It should sound crazy to you too when you are saying that AMD gets a 20% discount and Sony a 40% discount, while a 52% discount compared to AMD results in a total of 61% discount.

1) But the prices I gave for GDDR6 are discounted. For 16GB of GDDR6, AMD would pay between $110 and $150 depending on the discount. I think bigger quantities get bigger discounts, but maybe AMD pays the same price as Sony or MS: $110 for 16GB of GDDR6. And AMD's price for GDDR6 has nothing to do with the rumor.
2) Sony was one of the companies consulted on the InFO_MS process, just like the PS4 relied on a new chip module for its 8GB of GDDR5.
3) This is the rumor. HBM2 is only on a very few AMD cards; I don't find the discounted price for HBM2 shocking at all... If Sony buys HBM2, it will be in huge quantities, nothing like AMD's volume (two orders of magnitude more), and it will help improve the fab process and economies of scale for the HBM2 supplier, who can continue to charge premium datacenter customers the same price but with better profitability from the improved fab process. HBM2 is contract-based; datacenter customers will not even know the price Sony paid... The 20% discount is for GDDR6, nothing to do with HBM2.

It seems you don't understand the math: the only thing coming from the rumor is the HBM2 costing 35% to 40% more than GDDR6, and I used the worst case, 40%. All the other numbers come from the Guru3D article and DRAMeXchange spot prices for DDR4... Nothing fancy, and I did the same calculation two days ago.

EDIT: 52% is not far from 40% if the HBM2 were not sold by contract but at spot price with a 40% discount... This is not some incredible rumor with HBM2 cheaper than GDDR6...

EDIT2: HBM2 has no spot price and no discount; it is contract-based.
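The comparison being made in this exchange can be sketched as follows. Every constant here is an estimate or rumor pulled from this thread (the per-GB prices are assumed ballpark values, and the 40% HBM2 markup over GDDR6 is the rumor itself), so the absolute dollar figures are illustrative only.

```python
# Sketch of the 24GB-GDDR6 vs 8GB-HBM2 + 16GB-DDR4 cost argument.
# All constants are assumptions/rumors from this thread, not confirmed prices.
GDDR6_PER_GB = 9.0    # assumed discounted $/GB for GDDR6
HBM2_MARKUP = 1.40    # rumor: HBM2 costs 35-40% more than GDDR6 (worst case)
DDR4_PER_GB = 3.5     # assumed spot-market $/GB for DDR4

cost_24gb_gddr6 = 24 * GDDR6_PER_GB
cost_hbm_combo = 8 * GDDR6_PER_GB * HBM2_MARKUP + 16 * DDR4_PER_GB

print(cost_24gb_gddr6, cost_hbm_combo)
```

Under these assumptions the split HBM2+DDR4 setup comes out cheaper than 24GB of GDDR6 even with the full 40% markup applied, which is the crux of the argument; shift any of the assumed per-GB prices and the conclusion can flip.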
 
Last edited:

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
So, PS5 will be the most powerful - pure fantasies.
Anaconda will be the most powerful - a reality.

Like I said a few days ago, that's how it works here.

Pathetic!
We will only know for sure when they are announced, yeah, but MS has gone on record with their desire to have the most powerful console, and if you factor in the two pricing strategies it's clear to see how this can go.

No matter how strong the PS5 is, if they price theirs higher they will be able to outperform it... Perhaps not to the degree the X outperforms the Pro (I'm personally expecting a very small advantage on the GPU side, but more memory and thus more bandwidth, and a slightly higher clock on the CPU), but enough to claim that they have a stronger machine and that all games will play better on it. And the reason they can afford to price it higher and aim for better specs is that they will have a sales-leader machine in Lockhart.

Anyway, that's basically MS's strategy: have the strongest machine no matter the price, and the most affordable machine no matter the performance. It makes it really hard, if not impossible, to beat either of those points with a single SKU.
 

modiz

Member
Oct 8, 2018
17,822
Well, I think it is time for a new prediction from me:
PS5 - $499
Zen 2, 8 cores 16 threads @ 3.2GHz
1TB SSD with very high bandwidth, plus the ability to add an external hard drive.
RAM - 8GB HBM + 16GB DDR4, allowing for lower power consumption from the memory. 8GB HBM used completely for the GPU, 4GB DDR4 reserved for the OS, 12GB DDR4 split freely by developers between the CPU and the GPU; the SSD will also help with memory concerns.
GPU - Navi 10 Lite: 48CU clocked at 1.8GHz for 11TF, using the TDP saved by the memory setup to push clocks to a very high level.

Xbox Lockhart - $349~$399
Zen 2, 8 cores 16 threads @ 2.8GHz
1TB SSD (if MS opts to drop the SSD or do a HDD + small SSD solution then maybe they can hit $299)
RAM - 12~16GB GDDR6
GPU - Navi GPU, 40CU clocked at 1.3GHz for ~6.7TF

Xbox Anaconda - $499
Zen 2, 8 cores 16 threads @ 3.2GHz
1TB SSD
RAM - 24GB GDDR6
GPU - Navi GPU, 60CU clocked at 1.45~1.5GHz for 11.1~11.5TF (higher CU count with lower clocks, GDDR6 power draw limiting potential clock)

Overall with these numbers the Anaconda ends up with a slight advantage, but maybe I will be wrong.
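The TF figures in predictions like these all fall out of the same peak-FP32 formula for GCN/Navi-style GPUs: CU count × 64 shaders per CU × 2 ops per clock (FMA) × clock. A quick sketch for checking them:

```python
# Peak FP32 throughput for a GCN/Navi-style GPU.
# Each CU has 64 shaders, each retiring 2 FP32 ops (one FMA) per clock.
def teraflops(cus, clock_ghz, shaders_per_cu=64):
    """Peak FP32 TFLOPS = CUs * shaders * 2 ops * clock (GHz) / 1000."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000.0

print(teraflops(48, 1.8))   # the PS5 guess above
print(teraflops(40, 1.3))   # the Lockhart guess
print(teraflops(60, 1.5))   # the Anaconda guess at the upper clock
```

Running the numbers: 48CU @ 1.8GHz gives ~11.06TF, 40CU @ 1.3GHz gives ~6.66TF, and 60CU @ 1.45-1.5GHz spans ~11.1-11.5TF, matching the ranges quoted in the prediction.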
 

disco_potato

Member
Nov 16, 2017
3,145
So the question is: is it linear? If it is, a 12-chip setup with 12Gbps GDDR6 chips will draw 50% more power than the Xbox One X's setup (85% of GDDR5's power per bit, but a 76% higher data rate) and result in 576GB/s.

If these are really the numbers, I don't see a way to have 24GB of unified memory in a console anyway. We are talking about spending ~100W on memory alone, which is insane, so we are limited to 16GB of GDDR6 or some kind of split memory setup.
100W? When the JEDEC spec for GDDR5X was released, there was a theoretical power consumption of 20W per 8GB on a 256-bit bus. I believe Micron said power consumption was 2-2.5W per RAM component, or 10-30W per board depending on how many chips are used. GDDR6 is supposed to be more efficient than that, isn't it?
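Taking Micron's quoted ballpark of 2-2.5W per chip at face value, the memory power for a 12-chip board is easy to sketch. The 10% GDDR6 efficiency gain cited later in the thread is folded in as well; all inputs here are the thread's estimates, not measured figures.

```python
# Rough memory-power sketch from the per-chip ballpark quoted above.
# Both constants are estimates from this thread, not datasheet values.
WATTS_PER_GDDR5_CHIP = 2.5       # upper end of Micron's quoted 2-2.5W range
GDDR6_EFFICIENCY_GAIN = 0.10     # JEDEC-cited ~10% improvement over GDDR5

def board_power(num_chips, watts_per_chip=WATTS_PER_GDDR5_CHIP):
    """Total memory power for a board with num_chips identical chips."""
    return num_chips * watts_per_chip

gddr5_12chip = board_power(12)                            # X1X-style board
gddr6_12chip = gddr5_12chip * (1 - GDDR6_EFFICIENCY_GAIN)
print(gddr5_12chip, gddr6_12chip)
```

Even at the top of Micron's range, 12 chips lands around 30W for GDDR5 and ~27W for GDDR6, nowhere near the ~100W figure being disputed, which is the point of the reply above.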
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
100W? When the JEDEC spec for GDDR5X was released, there was a theoretical power consumption of 20W per 8GB on a 256-bit bus. I believe Micron said power consumption was 2-2.5W per RAM component, or 10-30W per board depending on how many chips are used. GDDR6 is supposed to be more efficient than that, isn't it?

GDDR6 is supposed to be about 10% more efficient than GDDR5, per the JEDEC standard.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
And again, that's a combination of rumors and estimates. The 40% discount is an estimate, the 35%-40% more than GDDR6 is a rumor, and the GDDR6 prices are again an estimate. When you base precise calculations on a bunch of estimates stacked on one another, the result could wildly differ from reality. If every estimate is off by just 10%, stacking three of them already puts you off by 33%, which in this case is $37.

The root of the problem is the original HBM2 rumor, because:
1) Everything in that rumor could be applied to GDDR6 too. Buying GDDR6 in large volumes will drive its price down too, and buying lower-quality chips and clocking them lower will make them cheaper too.
2) InFO_MS was said to be "in the early stages of R&D" in January 2019. The PS5's architecture was locked long before that, so I doubt that their whole memory setup relies on something that was just a few months ago "in the early stages of R&D". Although InFO_MS will find its way into products by the end of 2019, I'm not sure that a machine being designed years in advance could rely on it.
3) Sony gets it 52% cheaper than what AMD is getting? Sounds crazy to me. It should sound crazy to you too when you are saying that AMD gets a 20% discount and Sony a 40% discount, while a 52% discount compared to AMD results in a total of 61% discount.


$160 is the price that AMD gets, so it's already discounted, which means you can't apply 40% on top of that. If AMD gets 20% off (your guess), then it's $200 per 8GB of HBM2 before discount, which is $120 after the 40% discount that Sony gets. Then you have to add in the 16GB of DDR4 and its extra memory controller, and you are well over $150 if not $160.
See, now you are just ignoring things to make your own point.

  1. No, that logic can't be applied to GDDR6, because GDDR6 is already a mass-volume product. Pretty much every single PC GPU will have a GDDR6 variant, and it is also going into at least one of the two next-gen consoles.

  2. If Sony/MS or even both set out to put HBM in their next-gen console, that literally is the game changer HBM needs. You are talking about a sales bump of an order of magnitude that simply doesn't exist now: HBM goes from being in 1M-3M products to being in 20M products per year. Yes, that will drive HBM costs down, and to say otherwise is just disingenuous.

  3. But all that aside, the most important part of the rumor you seem to be ignoring is that Sony isn't even going for the best-performing HBM2 chips. That's what makes it possible for them to be getting the kind of deal they are rumored to be getting, and that's why they can get it for significantly less than what AMD is paying. You simply can't choose to consider the HBM rumor and then ignore the one defining part of it.

  4. And these estimates you talk about... no matter what, everything is an estimate, including what you are saying. So you are using estimates to disprove estimates because you like the sound of your own estimates better?

    Unless you can show otherwise, we can easily look at the "estimates" from the info provided on DRAMeXchange and Guru3D for GDDR6 pricing. We also have multiple sources suggesting HBM2 costs around $80 for a 4GB stack, which puts 8GB at $160. That is just ridiculous, as it would cost more than what 24GB of GDDR6 is likely to cost. So the only way any of the HBM talk makes sense is if you take seriously the part of the rumor that started it, which said Sony is getting a great deal because they are using less-than-optimal chips and/or clocking their chips lower.

    If it's possible that Sony is paying about half of what AMD would have paid, based on the volume of orders and the fact that they aren't even using stacks as good as what AMD would need, then it starts to make sense. You end up with a scenario where 8GB HBM + 8GB DDR4 costs Sony less than 24GB of GDDR6, with the sacrifice being that Sony uses less RAM in total. If we don't want to consider that possibility, then we might as well discount the whole HBM rumor, because there is no other way it makes sense.
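The disputed discount-stacking arithmetic can be written out explicitly. Both discount percentages and the $160 base price are rumors or guesses from this thread, not known figures; the point of the sketch is only how the percentages compose.

```python
# The discount stacking disputed above, written out explicitly.
# All inputs are rumors/guesses from this thread, not confirmed prices.
amd_price = 160.0      # rumored price AMD pays per 8GB of HBM2
amd_discount = 0.20    # guessed discount AMD already gets off list price
sony_discount = 0.40   # rumored discount Sony gets off list price

list_price = amd_price / (1 - amd_discount)     # implied list price, ~$200
sony_price = list_price * (1 - sony_discount)   # Sony's price, ~$120
below_amd = 1 - sony_price / amd_price          # 40% off list = 25% below AMD

# Conversely, "52% cheaper than AMD" would imply a much deeper total cut:
implied_total = 1 - amd_price * (1 - 0.52) / list_price   # ~61.6% off list

print(list_price, sony_price, below_amd, implied_total)
```

So the two claims really are different: a 40% discount off list leaves Sony only 25% below AMD's price, while being 52% below AMD's already-discounted price would require roughly a 61.6% total discount off list, which is the "61%" figure in the post above.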
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
2) InFO_MS was said to be "in the early stages of R&D" in January 2019. The PS5's architecture was locked long before that, so I doubt that their whole memory setup relies on something that was just a few months ago "in the early stages of R&D". Although InFO_MS will find its way into products by the end of 2019, I'm not sure that a machine being designed years in advance could rely on it.
You conveniently skipped over the part where it says it will be in production this year:

It should be in production this year.

This matches exactly what TSMC said when they unveiled it last year.

https://www.eetimes.com/document.asp?doc_id=1333244&page_number=3

The InFO technique is getting four cousins. Info-MS, for memory substrate, packs an SoC and HBM on a 1x reticle substrate with a 2 x 2-micron redistribution layer and will be qualified in September.

Everything is designed years in advance, so that is an empty comparison. The parts for the next-gen consoles won't be in HVM (high-volume manufacturing) for almost an entire year from now.
 
Last edited:

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
I want to add something to the GDDR6 vs HBM2 discussion that we should also think about, unrelated to power consumption:
1) Consoles are still mass-market consumer products with a life cycle of about 5-7 years.
2) I would assume that MS and Sony also think about defects and plan for repairability.
3) With HBM2 stacked on the SoC, a memory defect would force you to replace the most expensive part, because the memory is stacked on the SoC.
4) With GDDR6, you may only have to replace the defective chip(s) on the PCB.
5) I am not only talking about units going defective after usage at home/in the store, but also defects on the production line.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
Remember guys, I'm speculating like everyone else, and as I've kept on saying, I know absolutely nothing... nothing... but I DO know that my expectations are always kept in check, because if something's too good to be true, then... well, you know the rest.

Well, given that Stadia's specs have been made public and are now the baseline for next-gen performance, what kind of advancements can be expected from that kind of spec (going well beyond just the GPU)?

Still quite content with my last prediction for the two big dogs:

  • Anaconda - 11.3 TF - 60 CUs @ 1475 - 499
  • PS5 - 12.9 TF - 56 CUs @ 1800 - 399

I honestly cannot fathom this happening, not after how the OG Xbox One turned out. Also, another reminder that it was the devkit that was "rumoured" to have 12.9TF. Devkits, by their nature, have additional computational resources compared to their retail counterparts.

I will eat my hat if the PS5 launch console comes in anywhere north of 12TF.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
Well, given that Stadia's specs have been made public and are now the baseline for next-gen performance, what kind of advancements can be expected from that kind of spec (going well beyond just the GPU)?
Who said Stadia is the baseline? The only statement I know of is that Stadia is a target ("aiming"). When I hear that someone is "aiming" at something, I remember all the cases where developers were aiming for 60fps. Surprise, surprise, the moment we saw the real frame rate. Aiming is way too weak to call it a baseline, imo.
 

Fizie

Member
Jan 21, 2018
2,849
Will be fun to look back on this thread in a year or two and see which rumor was most correct
Very fun.

As someone who has very limited knowledge of hardware, I find this thread fascinating. I also love how defensive people get when discussing their own theories, when the vast majority will be utterly wrong.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
Who said Stadia is the baseline? The only statement I know of is that Stadia is a target ("aiming").

Reading Eurogamer:

Google has released the following data for Stadia. It's a curious mixture of data points, combining the kind of minutiae rarely released on some components along with notable omissions elsewhere, such as the amount of cores/threads available for developers on the CPU. Regardless, it paints a picture of a highly capable system, clearly more powerful than both the base and enhanced consoles of the moment.

  • Custom 2.7GHz hyper-threaded x86 CPU with AVX2 SIMD and 9.5MB L2+L3 cache
  • Custom AMD GPU with HBM2 memory and 56 compute units, capable of 10.7 teraflops
  • 16GB of RAM with up to 484GB/s of performance
  • SSD cloud storage
Google says that this hardware can be stacked, that CPU and GPU compute is 'elastic', so multiple instances of this hardware can be used to create more ambitious games. The firm also refers to this configuration as its 'first-gen' system, the idea being that datacentre hardware will evolve over time with no user-side upgrades required. Right now, it's not clear if the 16GB of memory is for the whole system, or for GPU VRAM only. However, the bandwidth confirmed is a 100 per cent match for the HBM2 used on AMD's RX Vega 56 graphics card.

I really don't see this as "aiming"; it reads like something that is set. And that is to be expected, given that Stadia does not suffer from the heat and power-consumption limitations of a home console to the same degree.

And as for why I said baseline: well, given what Jschreier said about MS and Sony both targeting higher specs, and the fact that it is literally the first next-gen platform to set a standard, I dubbed it the 'baseline'. However, if MS does go for a lower-powered SKU, then in all likelihood THAT will become the new baseline once its specs are publicly known.
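The published Stadia figure is at least internally consistent: with 56 CUs at 64 shaders × 2 ops per clock, 10.7 TFLOPS backs out to a clock close to a Vega 56's boost clock, which fits the bandwidth match Digital Foundry notes above. A quick check:

```python
# Back out the GPU clock implied by Stadia's published 10.7 TFLOPS / 56 CUs,
# using the standard GCN figure of 64 shaders x 2 FP32 ops per CU per clock.
CUS = 56
FLOPS_PER_CU_PER_CLOCK = 64 * 2

# 10.7 TFLOPS = 10700 GFLOPS; dividing by flops-per-clock gives GHz.
clock_ghz = 10.7e3 / (CUS * FLOPS_PER_CU_PER_CLOCK)
print(round(clock_ghz, 3))  # ~1.493 GHz
```

That ~1.49GHz result is in the same neighborhood as Vega 56's rated boost clock, so the published CU count, TFLOPS figure, and the Vega 56 comparison all hang together.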
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
Last time I checked (last article about next gen consoles by R. Leadbetter) it was 8TF up to 14TF iirc

Well the last feature he did was with Phil Harrison and Majd Bakar regarding Stadia. In that interview this query was posed followed by this answer:

But you've got some very smart people who can produce some very good projections.

Phil Harrison: The GPU that is built into our first generation system is more than 10 teraflops of performance and we will scale up from there.

So if you are talking about home consoles, then yes, that's still the range but Stadia has its specs announced publicly.
 

Deleted member 1589

User requested account closure
Banned
Oct 25, 2017
8,576
You conveniently skipped over the part where it says it will be in production this year:



This matches exactly what TSMC said when they unveiled it last year.

https://www.eetimes.com/document.asp?doc_id=1333244&page_number=3



Everything is designed years in advance, so that is an empty comparison. The parts for next gen consoles won't be in HVM for almost an entire year from now.
So if this HBM rumor is true, when do you think devs could get close-to-final devkits for the PS5?

Or is it already possible?
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
But they are strictly talking just about Stadia, right? So I don't see where the "baseline" comes from ...

Ah, I see where the misunderstanding is. It is a matter of semantics. It's like a qualifying session in F1: the first one to complete a hot lap sets a target, a baseline, and then over the next 16 minutes the other 19 cars try to best it. At the conclusion of the first qualifying session, the driver who qualified 20th becomes the new baseline of performance.

Similarly, given Stadia is the only platform whose specs are known instead of being wildly speculated, they are now the baseline for next generation pertaining to power. Everything else pertaining to Sony and MS are pure speculation.

Edit: I really hope you watch F1 :P
 