
What do you think could be the memory setup of your preferred console, or one of the new consoles?

  • GDDR6

    Votes: 566 41.0%
  • GDDR6 + DDR4

    Votes: 540 39.2%
  • HBM2

    Votes: 53 3.8%
  • HBM2 + DDR4

    Votes: 220 16.0%

  • Total voters
    1,379

Liabe Brave

Professionally Enhanced
Member
Oct 27, 2017
1,672

~320mm² for your setup. AMD won't have RT cores that are as big as Nvidia's.
Your math is off. No matter the size of the RT hardware, 320mm² leaves zero room for it, as that whole area is already consumed by the 70mm² Zen 2 complex plus the RT-less 251mm² 5700 XT. In fact, it leaves less than zero room, because the 5700 XT has no CUs disabled for yield; getting to 40 active CUs on a console chip will require even more space.
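As a quick sketch of that area argument (both die sizes are the approximate public figures for the 7nm parts named in the post, not official console numbers):

```python
# Area budget sketch using the figures from the post above
# (approximate public die sizes, not official console numbers).
zen2_cpu_mm2 = 70   # 8-core Zen 2 CPU complex
navi10_mm2 = 251    # full 40-CU 5700 XT die, no RT hardware

used = zen2_cpu_mm2 + navi10_mm2
print(f"CPU + GPU alone: {used} mm^2")         # 321 mm^2
print(f"Room left in 320 mm^2: {320 - used}")  # -1, i.e. none
```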
 

Gay Bowser

Member
Oct 30, 2017
17,754
And no, I meant an extra $100 in comparison to the PS4. The PS4 APU was 350mm² and fit in a $399 console with no extra cooling. Everyone here assumed Sony would stick with a 350mm² or even smaller Pro-sized die despite it becoming clear that we would not be seeing a $399 PS5.

I'm sorry, when did this "become clear"?
 
Dec 31, 2017
1,430
I meant it as a figure of speech. They obviously don't have access to the Scarlett APU, but I'm sure they had an idea of what MS would put in their $499 console, seeing as they are both getting their APUs from the same manufacturer, based on the same Navi and Zen 2 products. All they have to do is ask AMD what they can do with a more expensive APU.

And no, I meant an extra $100 in comparison to the PS4. The PS4 APU was 350mm² and fit in a $399 console with no extra cooling. Everyone here assumed Sony would stick with a 350mm² or even smaller Pro-sized die despite it becoming clear that we would not be seeing a $399 PS5.

So now Sony has an extra $100. They can spend it on extra cooling and a larger APU. The PS4 APU was only $100; I am being generous with the $150 figure.
But wouldn't both MS and Sony ask what they can do with a more expensive APU? Although I guess MS was probably already paying this much for the X when it released. But I get what you say now; I thought at first there was a price delta in your explanation.

I think MS has good experience with cooling, though, and seems to have had much better cooling solutions than Sony (both Xbox and Surface experience), so I could see a scenario where Scarlett is 10.7 TFLOPS and PS5 is around 11.2 TFLOPS or something close, but with the CPU clocked higher on the MS console, just as it was on the Xbox One and the One X. A 500 GFLOPS difference wouldn't really be noticeable in third-party games, and they'd most likely both be the same price, with MS bundling a free month of Ultimate with their console at launch to sweeten the deal.

I still believe MS could surprise us and come out on top in the end; with a year and a half left, they could make something happen.

Maybe Sony will come out at PSX or early next year and announce specs, and by then MS might be able to push the boundaries of what their console can do with some extra clock speed and cooling to counter, if that is really what is going on right now and PS5 is stronger. Or maybe we'll end up with another RROD, lol. Maybe the dev kits are misleading and more power is on the way. Can't wait to find out!
 

chris 1515

Member
Oct 27, 2017
7,075
Barcelona Spain
Subor-Z 2 (Electric Boogaloo). I don't believe this, but it's a possible explanation, right?


You're still missing a little area. To put 5700XT into the Anaconda, you'd add not just the Zen 2 but also another memory controller. That gets you to 334mm^2. Then there's the RT hardware--we don't know exactly how much room it'd take, but Nvidia's RT hardware seems to add about 7-8% in size. So that'd be another 10mm^2 (using the lower percentage). We're up to 344mm^2 to put 40 (physical) CUs into the chip.

So yes, there's room for a bigger GPU in there...but not a whole lot bigger. Adding another SE to get to 54 active CUs (60 physical) would bring us to 422mm^2. That's very likely larger than the Anaconda chip we've been shown. And it would certainly require a more robust cooling system than even the One X.
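A sketch reproducing the quoted estimate (every figure is the poster's approximation; the memory-controller and shader-engine areas are implied by the 334mm² and 422mm² totals rather than stated directly):

```python
# Reconstructing the quoted area estimate step by step.
navi10 = 251    # mm^2, 40-CU 5700 XT
zen2 = 70       # mm^2, 8-core Zen 2 complex
extra_mc = 13   # mm^2, implied by the 334 mm^2 subtotal
rt_hw = 10      # mm^2, ~7% RT overhead (low end of the Turing-based guess)

cu40 = navi10 + zen2 + extra_mc + rt_hw
print(cu40)             # 344 mm^2 for 40 physical CUs

extra_se = 78           # mm^2, implied by the 422 mm^2 total (20 more CUs)
print(cu40 + extra_se)  # 422 mm^2 for 60 physical / 54 active CUs
```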

It seems Subor is out of business.

But why an APU? Maybe Sony will use a chiplet architecture.
 

eathdemon

Banned
Oct 27, 2017
9,690
This may only matter for PC players, but even with that 25% boost they get with Navi, it's a good reminder that the gap between Nvidia and AMD is still rather wide. Also, this doesn't even include raytracing.
 
Oct 27, 2017
4,657
Where did the PS4 super slim rumor come from?
I was trying to find it for you but its original throwaway account post has been deleted. Fortunately anex saved it:

PS4 refresh
  • sometime between September and November
  • $199
  • fabbed on samsung 7nm EUV
  • best wafer pricing in the industry
  • die size 110mm²
  • no PRO refresh, financially not viable yet
  • too close to PS5 as well
PS5 memory and storage systems
  • 24 GB RAM in total (20 GB usable by games)
  • 8 GB in form of 2 * 4-Hi stacks HBM2
  • Sony got "amazing" deal for HBM
  • in part due to them buying up bad chips from other customers which can't run higher than 1.6 Gbps while keeping 1.2v
  • HBM is expected to scale down in price a lot more than GDDR6 over the console lifetime
  • Samsung, Micron and SK Hynix already shifting part of their capacity towards HBM due to falling NAND prices
  • Sony will be one of the first high-volume customers of TSMC's InFO_MS when mass production starts later this year (normal InFO already used by Apple in their iPhones)
  • InFO_MS brings down the cost compared to traditional silicon interposers - has thermal and performance advantage as well
  • InFO_MS allows them to drive their 1.6 Gbps chips @ 1.7 Gbps (435 GB/sec.) without having to increase the voltage above 1.2v
  • HBM is more power efficient compared to GDDR6 - the savings were invested into more GPU power
  • additional 16 GB in form of DDR4 @ 256 bit for 102.4 GB/sec.
  • 4 GB reserved for OS, the remaining 12 GB usable by games
  • memory automatically managed by HBCC and appears as 20 GB to the developers
  • HBCC manages streaming of game data from storage as well
  • developers can use the API to take control if they choose and manage the memory and storage streaming themselves
  • memory solution alleviates problems found in PS4
  • namely that CPU bandwidth reduces GPU bandwidth disproportionately
  • 2 stacks of HBM have 512 banks (more banks = fewer conflicts and higher utilization)
  • GDDR6 is better than GDDR5 and GDDR5X in that regard but still has fewer banks than HBM
  • at the same time trying to keep CPU memory access to slower DDR4
  • very satisfied with decision to use two kinds of memory for price to performance reasons
  • allowed them to go below ~50 GFLOPs per GB/sec. bandwidth but still keep above 40 GFLOPs per GB/sec.
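The quoted leak's bandwidth figures check out arithmetically. A sanity-check sketch (pin counts are standard HBM2/DDR4 values rather than numbers from the leak, and the final ratio reading is speculative):

```python
# Sanity-checking the quoted leak's bandwidth figures (sketch; pin
# counts are standard HBM2/DDR4 values, not from the leak itself).
hbm_stacks, hbm_bus_per_stack = 2, 1024   # bits per 4-Hi HBM2 stack
hbm_gbps = 1.7                            # per pin, as claimed
hbm_bw = hbm_stacks * hbm_bus_per_stack * hbm_gbps / 8
print(hbm_bw)      # 435.2 GB/s -- matches the leak's "435 GB/sec."

ddr4_bus, ddr4_gbps = 256, 3.2            # DDR4-3200 assumed
ddr4_bw = ddr4_bus * ddr4_gbps / 8
print(ddr4_bw)     # 102.4 GB/s -- matches the DDR4 bullet

total = hbm_bw + ddr4_bw                  # 537.6 GB/s combined
# If the last bullet is read as GB/s of bandwidth per TFLOP, it
# brackets the GPU at roughly 10.8-13.4 TF (speculative reading).
print(total / 50, total / 40)
```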
 

modiz

Member
Oct 8, 2018
17,907
I was trying to find it for you but its original throwaway account post has been deleted. Fortunately anex saved it: [full leak quoted above]
I have been trying to think about this leak again, and there was something that always felt off to me, and I wasn't sure what... now I think I know:
in part due to them buying up bad chips from other customers which can't run higher than 1.6 Gbps while keeping 1.2v
No way they are going to use bad chips as their plan. How many of those can they even get?
 

Adookah

Member
Nov 1, 2017
5,746
Sarajevo
I BELIEVE - 12.88 Tflops - PS5

PS1
PS2
PS3
PS4
PSP
PS Vita

So the PS5 is the seventh PlayStation console. 1.84 TF is the power of the OG PS4.

1.84 x 7 = 12.88

It all makes sense now.

Joke post
 
Oct 27, 2017
4,657
I have been trying to think about this leak again, and there was something that always felt off to me, and I wasn't sure what... now I think I know:

No way they are going to use bad chips as their plan. How many of those can they even get?
That part was always interesting to me as well.

My take on it wasn't that they would only use those chips, but that their plans wouldn't be as dependent on the chips being above that threshold as other vendors' plans might be. They can take chips that are below the threshold for other vendors' use cases, and they can include those "rejects" with their other chips for a collective purchasing discount.

Edit for clarity changes & to add: It's just like how we see binned processors, CUs in GPUs, etc. The parts are still perfectly usable, just not at the peak performance that would make them attractive to a different segment of the market.
 
Last edited:

chris 1515

Member
Oct 27, 2017
7,075
Barcelona Spain
That part was always interesting to me as well.

My take on it wasn't that they were only going to use those chips, probably more that their plans aren't as dependent on the chips being above that threshold. They can take chips that fall below the threshold other vendors might need, and so they can include those "rejects" with their other chips for a collective purchasing discount.

Yes, only part of their supply would be bad chips, not all of the chips.
 

VX1

Member
Oct 28, 2017
7,005
Europe
I have been trying to think about this leak again, and there was something that always felt off to me, and I wasn't sure what... now I think I know:

No way they are going to use bad chips as their plan. How many of those can they even get?

I wonder if the planned "2019 PS5" had this HBM2 + DDR4 memory combo and this changed, among other things, with the one-year delay to 2020.
 

RevengeTaken

Banned
Aug 12, 2018
1,711
What is semi-raytracing? Either rays are traced or they are not.
And why would you dedicate CUs to it in hardware? You can dedicate an arbitrary number of CUs to raytracing right now, on any GPU.
I mean raytracing only for reflections and some sort of light shafts.
Yes you can, but you can also dedicate a certain number of CUs on the die solely to raytracing, forcing developers to use only those to implement raytracing in their games. Don't expect anything like Nvidia's RT cores on a 320-350mm² die.
 

lightchris

Member
Oct 27, 2017
684
Germany
I mean raytracing only for reflections and some sort of light shafts.

Ok, but this is true on Nvidia's side too. Only some select lighting effects can be raytraced at acceptable performance, most parts are traditionally rendered and rasterized.

Yes you can, but you can also dedicate a certain number of CUs on the die solely to raytracing, forcing developers to use only those to implement raytracing in their games. Don't expect anything like Nvidia's RT cores on a 320-350mm² die.

That doesn't make a lot of sense in my opinion (and as far as I'm aware something like that has never been done before). Devs should be able to make the best decisions for their game, which in turn is also best for Sony.
 

gofreak

Member
Oct 26, 2017
7,819
I mean raytracing only for reflections and some sort of light shafts.
Yes you can, but you can also dedicate a certain number of CUs on the die solely to raytracing, forcing developers to use only those to implement raytracing in their games.

I can say with a fair degree of confidence that's not going to happen.

I don't know if there'll be something analogous to 'RT cores' or not in AMD's approach but there is a spectrum of things they could implement, from the very light through to heavier expenditure of silicon, targeting RT, that would technically qualify as 'hardware acceleration'.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
You do not seem to understand what I am saying. I am not saying that clocks and flops are the same thing. What AMD refers to as performance per clock is directly proportional to performance per flop when comparing RDNA and GCN cards. Therefore, comparing the performance per flop or per clock of one RDNA or GCN GPU to another as a ratio will produce an identical result.
Oh, I understood it; it was just wrong, and I wanted to explain the basic concepts of why. Your original statement would only be true if, for each clock, all stream processors were utilized 100% for all clock cycles. But that is something that will never happen, and as a result the statement you made will never be true. There are even two slides in the AMD slide deck that show you that: slide 14 and especially slide 15. They call it "effective throughput" while I called it "efficiency".

AMD E3 Event Slides
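A toy illustration of the "effective throughput" point: two GPUs with identical paper TFLOPS deliver different real performance once utilization differs, so performance per peak flop is not a fixed architectural constant. The utilization values below are invented for the example, not AMD's numbers.

```python
# Toy "effective throughput" illustration: same peak FLOPS, different
# delivered performance once utilization differs.
def throughput(cus, clock_ghz, utilization):
    sps = cus * 64                        # stream processors
    peak_tf = sps * 2 * clock_ghz / 1000  # FMA = 2 ops per clock
    return peak_tf, peak_tf * utilization

print(throughput(40, 1.9, 0.70))  # (9.728, ~6.81) GCN-like (assumed)
print(throughput(40, 1.9, 0.85))  # (9.728, ~8.27) RDNA-like (assumed)
```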
 
Last edited:

BitsandBytes

Member
Dec 16, 2017
4,576
PCI device 1022:13e5
Ariel Internal PCIe GPP Bridge 0 to Bus A
Advanced Micro Devices, Inc. [AMD]

PCI device 1022:13e6
Ariel Internal PCIe GPP Bridge 0 to Bus B
Advanced Micro Devices, Inc. [AMD]


HBM2 + DDR4?
or
GDDR6 + DDR4?
or
Onion + Garlic 2.0? 🤔

I have no idea what these bridges are but the Subor Z+ has the same bridges:

15fd FireFlight Internal PCIe GPP Bridge 0 to Bus A
15fe FireFlight Internal PCIe GPP Bridge 0 to Bus B

So not likely HBM related.

The Subor Z+ actually has some interesting stats as something physically similar to what the next-gen console APUs will be:

4c-8t Ryzen @ 3GHz, 24CU Vega @ 1.3GHz, 8GB GDDR5, 3.99TF
14nm GF/Samsung process and ~390-400mm² die size
183W at the wall (DF test)
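The quoted 3.99 TF figure is straightforward to verify from the CU count and clock:

```python
# Verifying the quoted Subor Z+ compute figure.
cus, clock_ghz = 24, 1.3             # 24 Vega CUs at 1.3 GHz
sps = cus * 64                       # 64 shaders per CU
tflops = sps * 2 * clock_ghz / 1000  # FMA = 2 ops per clock
print(tflops)                        # 3.9936 -> the quoted 3.99 TF
```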
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
That leak you guys are all clinging to that even Richard from DF doesn't think is representative of the clocks we'll see in consoles?

None of us in this thread thought we'd see clocks that high until the Gonzalo leak. These parts have an impeccable track record of ending up in consoles. And frankly, there are several people in this thread just as knowledgeable on this subject as Richard, if not more so.

The other issue is that the XT has a 220W TDP, not usable for an APU. The 5700 is, but that's 8 TFLOPS. Even if you remove board power, my guess is still around 200W TDP for the XT.

It's less than 200W once you account for the GDDR6 needing its share of the board power. And you can cool that in a console; they cool it on GPUs, after all.
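A rough board-vs-ASIC power sketch of that argument (the 225W is AMD's published 5700 XT total board power; the memory/VRM split and the One X wall figure are ballpark assumptions):

```python
# Rough board-vs-ASIC power sketch (all values approximate: 225 W is
# the 5700 XT's total board power; the memory/VRM split is a guess).
board_power = 225     # W, total board power, 5700 XT reference
gddr6_plus_vrm = 35   # W, assumed GDDR6 draw plus conversion losses
asic = board_power - gddr6_plus_vrm
print(asic)           # ~190 W for the GPU silicon itself
# For comparison, the Xbox One X draws roughly 170-180 W at the wall,
# so a downclocked Navi + Zen 2 APU is within console cooling reach.
```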
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
I have been trying to think about this leak again, and there was something that always felt off to me, and I wasn't sure what... now I think I know:

No way they are going to use bad chips as their plan. How many of those can they even get?
That's one of the few leaks I actually believe. I am not all in on his reasoning, but I have got a feeling he's onto something.

Sony used GDDR5 this gen; I can see them trying to push expectations on RAM again and go with HBM2, especially if they anticipate that pretty soon every GPU will shift to HBM.

Ah well, all we have to do is wait for a PS4 slim to show up and see whether Samsung fabbed its APU.
 
Oct 26, 2017
6,151
United Kingdom
I'm just a layman, not a dev.

But I think every answer from today's standpoint will be vague even if you are asking a dev.
I expect rather limited integration of ray tracing and see the current Geforce RTX hardware as a usable reference point.


There is no AVX-512 support in Zen 2.
AMD kept it simple and made an upgrade from 128-bit pipes to 256-bit.
They use relatively lightweight cores and more of them.
For multithreaded applications you still get good vector performance.



RDNA has multiple working modes on different levels.

Wave32 or Wave64 is used depending on the workload.
Depending on the circumstances Wave32 or Wave64 is more efficient.


On a higher level, an RDNA Compute Unit can still be compared to a GCN Compute Unit, because under RDNA a Compute Unit can still work independently.
Every Compute Unit can work on a workgroup, which is also called CU mode, where the registers and caches are all exclusive per Compute Unit.

But it's also possible to have two Compute Units working on a workgroup together and sharing resources; this is called WGP (Work Group Processor) mode.
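A concrete illustration of the quoted Wave32/Wave64 point (the wavefront widths are RDNA's real options; the 256-thread workgroup is just an example size):

```python
# Illustrative wavefront math for the Wave32/Wave64 modes above.
import math

def wavefronts(workgroup_threads: int, wave_width: int) -> int:
    return math.ceil(workgroup_threads / wave_width)

wg = 256
print(wavefronts(wg, 32))  # 8 Wave32 wavefronts on RDNA
print(wavefronts(wg, 64))  # 4 Wave64 wavefronts (GCN-style width)
# CU mode: registers/LDS stay private to one Compute Unit.
# WGP mode: two CUs pool resources to run one workgroup together.
```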

Fantastic, thanks for the responses, Locuza. Although, there's no way you can be considered a layman, mate. If you're a layman, I don't even know what to call the rest of us in here.

So after we've had soft console reveals and Navi disclosure, here's my working theory on Navi.

Navi took so long to come out because AMD was actively engaged with Sony and MS engineers simultaneously, taking their design feedback and their demands for GCN ISA backwards compatibility. This meant AMD was constantly disclosing architecture details to Sony and MS and sharing technology to keep them apprised of its progress. In the case of MS at least, they were likely directly sharing RTL code. Along the way, MS and Sony were giving feedback and also adding feature requests. AMD constantly had to balance this feedback and decide what could make it into Navi and what would have to wait until RDNA 2.0. On top of that, they had to get Navi to a point where it could ship in a reasonable timeframe but be easily modifiable to accommodate further feature requests. All the while, Zen engineers were deployed to RTG to try to right the ship on power consumption, applying lessons learned from the Zen processors. This all created an extreme case of too many cooks.

Still, this is a net benefit for AMD because Sony and MS are constantly profiling these architecture features against test code on their end, giving AMD valuable architecture feedback for free, essentially. This ends up benefiting PC games too, which is just fine with MS given their new almost platform agnostic strategy.

I think this net benefit mentality is exemplified by the new deal with Samsung. Samsung essentially has carte blanche to play with RDNA as they see fit for mobile products, provided they don't release products that directly compete with AMD. In return, AMD gets a team of engineers using their architecture to achieve very low power designs, which they can hopefully integrate back into the RDNA family. This is also a mutually beneficial relationship for Samsung, because they're taking their designs off to their own fabs, and so the AMD architecture can be tailored and optimized to Samsung's nodes, which gives incentive to AMD to use Samsung as a foundry partner and break TSMC's grip on the industry.

Very reasonable speculation.

I also wonder, based on the old roadmaps, whether AMD started out designing Navi based on GCN with the intent of scaling beyond 64 CUs, but after Vega realised the need for more fundamental changes to the uarch in order to achieve that goal, and thus started developing what would later be decided on as a new uarch and become RDNA.

That has never been the case in any console ever :)

What has? A 350mm² console APU die?
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
lol haha.

I will wear the 12 TFLOPS shame badge too. YOLO!
Your wish is my command. Made a personalized version for you!

[image: personalized 12 TFLOPS shame badge]
 

BreakAtmo

Member
Nov 12, 2017
12,963
Australia
I have been trying to think about this leak again, and there was something that always felt off to me, and I wasn't sure what... now I think I know:

No way they are going to use bad chips as their plan. How many of those can they even get?

It explicitly says "in part", I'm sure they'd also be getting plenty of normal HBM2 chips.

And then maybe switching to DDR5 and HBM3 for the first slim revision?

Your wish is my command. Made a personalized version for you!

[image: personalized 12 TFLOPS shame badge]

Punished AegonSnake

A Poster Denied His Flops
 
Last edited:

bcatwilly

Member
Oct 27, 2017
2,483
Yes, both companies are using the same general AMD infrastructure for their next-generation consoles. But in my opinion the signs are right there in the public arena that Microsoft has gone in very deeply with AMD on co-design and co-engineering on the Scarlett SoC, and I can't wait to hear more about what they have actually done with the silicon. Jack Huynh has been the Corporate Vice President & General Manager of the Semi-Custom Group at AMD since March 2014. His https://www.linkedin.com/in/jackhuynh profile says the following:

Lead AMD's overall Microsoft engagement of the entire AMD product portfolio, including game consoles (Xbox), graphics, client and server solutions.

Note that he was the one who posted the Project Scarlett blog entry on the AMD site - https://community.amd.com/community/gaming/blog/2019/06/09/amd-powers-microsoft-project-scarlett - which includes the following wording:

Today we are honored to announce the latest amazing chapter in our long-term custom development partnership with Microsoft focused on pushing the limits of gaming.

AMD and Microsoft have co-designed and co-engineered a custom, high performance AMD SoC to power Project Scarlett to deliver an incredible gaming experience, including the next-generation of performance, graphics, lighting, visuals, and audio immersion. This processor builds upon the significant innovation of the AMD Ryzen™ "Zen 2" CPU core and a "Navi" GPU based on next-generation Radeon™ RDNA gaming architecture including hardware-accelerated raytracing.

Our relationship with Microsoft is based on silicon design deeply rooted in tight collaboration. Our engineering organizations work together as one design team to continually innovate and significantly advance the overall gaming experience.

The bottom line is that AMD posted no such gaming blog entry after the PS5 Wired article or since, and they didn't post one back when the Xbox One X was first announced or released either. Logically, there does seem to be some serious custom work together here that both are very proud of. The only way to ignore this completely is to assume that Sony's PR department just didn't work hard enough to get AMD to put something out about the PS5 work (in which case that PR person should probably be let go, I guess). These public facts are far more relevant at this stage than rumors of any kind on either side.
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
Very reasonable speculation.

I also wonder, based on the old roadmaps, whether AMD started out designing Navi based on GCN with the intent of scaling beyond 64 CUs, but after Vega realised the need for more fundamental changes to the uarch in order to achieve that goal, and thus started developing what would later be decided on as a new uarch and become RDNA.
Or the interceding design was cancelled.

My existing spec guess is 10-12TF GCN. What would that be in RDNA?

10-12TF.

Yes, both companies are using the same general AMD infrastructure for their next-generation consoles. But in my opinion the signs are right there in the public arena that Microsoft has gone in very deeply with AMD on co-design and co-engineering on the Scarlett SoC. [...]

Well said. I'd advise anyone to go back to the Scorpio DF article and read about the lengths they went to in optimizing that design. That mentality is at play, and probably deepened, for Scarlett.
 
Last edited:

dgrdsv

Member
Oct 25, 2017
12,024
Current devkits likely have Vega 10/20 in them, which can easily be at 12-14 TFLOPS. Navi in the retail units won't be, though.
 

bcatwilly

Member
Oct 27, 2017
2,483
I have a guess for one of the areas of differentiation in the PS5 and Xbox Scarlett silicon for next generation that could end up being talked about in Digital Foundry videos and such. Both sides have made mention of 8K and 120 fps support, which is somewhat checking the box for HDMI 2.1 compliance. However, to even provide options for such things, and arguably even to handle 4K at 60 fps on many demanding games, some type of "checkerboarding" or similar approach is going to be in play.

  • PS5 - Digital Foundry and others have praised the approach to checkerboarding on the PS4 Pro, and I suspect that they will have some custom silicon (or "secret sauce" if you will) that provides support to an improved version of this intended to deal with resolutions all the way up to 8K as necessary.
  • Xbox Scarlett - Notice that Navi does not include hardware support for Variable Rate Shading (VRS) which was disappointing to many, and VRS is definitely another approach to deal with the issue of scaling to different resolutions like with checkerboarding. Microsoft has been working deeply on VRS with DirectX (https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/), and I suspect that their custom silicon will support VRS.

I am not going to predict which hypothetical solution may end up being better, but this does seem to be a possible area of differentiation given no Navi support for VRS.
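A back-of-envelope sketch of why either technique matters: a region shaded at a 2x2 coarse rate needs a quarter of the pixel-shader invocations. The 40% coverage figure below is an arbitrary example, not a measured number.

```python
# Shading-rate savings sketch (the 40% coverage is an assumption).
pixels_4k = 3840 * 2160
coarse_share = 0.40   # fraction of the frame shaded at 2x2 rate

invocations = pixels_4k * (1 - coarse_share) + pixels_4k * coarse_share / 4
saved = 1 - invocations / pixels_4k
print(f"{saved:.0%} of pixel-shader work saved")  # 30%
```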
 
Oct 25, 2017
9,205
I have a guess for one of the areas of differentiation in the PS5 and Xbox Scarlett silicon for next generation that could end up being talked about in Digital Foundry videos and such. [...]

I really hope Sony isn't wasting any resources on 8K. It's a joke to think it will be relevant this generation.
 

anexanhume

Member
Oct 25, 2017
12,918
Maryland
Okay I realise I asked it in a dumb way but I think you know what I was asking.

I found this slide (taken with a nice dose of salt of course) which answers it for me:

[slide image]


I therefore revise my guess to 8-9TF for the consoles.

The reason I am resistant to that is that people are using it to go all revisionist-history on their 12-14TF predictions.
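For reference, the arithmetic behind that 8-9TF revision: divide a GCN paper-flops guess by AMD's claimed ~1.25x RDNA performance-per-clock uplift (a marketing average, so treat the output as a rough equivalence, not a law):

```python
# GCN-to-RDNA paper-flops conversion using AMD's ~1.25x claim.
def rdna_equivalent_tf(gcn_tf: float, uplift: float = 1.25) -> float:
    return gcn_tf / uplift

for tf in (10, 12):
    print(f"{tf} TF GCN ~ {rdna_equivalent_tf(tf):.1f} TF RDNA")
# 10 TF GCN ~ 8.0 TF RDNA; 12 TF GCN ~ 9.6 TF RDNA
```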

I have a guess for one of the areas of differentiation in the PS5 and Xbox Scarlett silicon for next generation that could end up being talked about in Digital Foundry videos and such. [...]

MS patents straight up call out hardware approaches for VRS.

 

bcatwilly

Member
Oct 27, 2017
2,483
I really hope Sony isn't wasting any resources on 8K. It's a joke to think it will be relevant this generation.

They have both mentioned it at least, which is why I believe they will need some custom silicon to support it, given that Navi by itself doesn't have VRS support. I don't think either is really likely to push 8K too much, but the fact that Sony also sells TVs is one reason to consider that they may feel some need or desire to push 8K TV sales.
 

Chamber

Member
Oct 25, 2017
5,281
Enhancements to checkerboard rendering and other reconstruction and upscaling techniques are important outside of 8K. I'd rather they not waste resources on native 4K either.
 

Putty

Double Eleven
Verified
Oct 27, 2017
933
Middlesbrough
The 8K chat is 110% there because it's part of the HDMI 2.1 spec. Absolutely zero silicon/hardware work will be added to these machines in an effort to make it possible.
 

Deleted member 10193

User requested account closure
Banned
Oct 27, 2017
1,127
[image: slide showing Navi base, game, and boost clock definitions]


Has anyone seen this? Base clock is the minimum, boost is the ideal max, and game clock is an average achievable clock while playing games.
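What those three labels mean in paper FLOPS, using the 5700 XT's public reference clocks as the example part (40 CUs = 2560 shaders):

```python
# Paper TFLOPS at each of the three Navi clock labels.
sps = 2560                            # 40 CUs x 64 shaders
for label, mhz in (("base", 1605), ("game", 1755), ("boost", 1905)):
    tf = sps * 2 * mhz / 1e6          # FMA = 2 ops per clock
    print(f"{label} clock {mhz} MHz -> {tf:.2f} TF")
# base 8.22 / game 8.99 / boost 9.75 TF; "game clock" is the average
# AMD expects while gaming, which is why it sits between the other two.
```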
 

bcatwilly

Member
Oct 27, 2017
2,483
Enhancements to checkerboard rendering and other reconstruction and upscaling techniques are important outside of 8K. I'd rather they not waste resources on native 4K either.

I agree that the techniques I mentioned will be applicable to any possible 8K scaling (not as common) and to everything below it, where they will be more common and useful.
 

Straffaren666

Member
Mar 13, 2018
84
Are they? This wasn't really clear in any of the docs released so far.

The bigger issue, though, is that they haven't really provided any reason for such a design. Why would anyone want to run a workload on a WGP instead of a CU, or vice versa?

IMO, the most likely reason for sharing the LDS between two CUs is to be able to efficiently pass data between them. I suspect it's related to the new surface/primitive shaders. For instance, one CU doing the per-vertex position/attribute shading and the other CU doing the per-primitive processing/shading, using the LDS to pass processed data between the two shaders.
 