Status
Not open for further replies.

DrKeo

Banned
Mar 3, 2019
2,600
Israel
More info about Oodle Textures and the blog data samples:
The 127 MB example is run at lambda=40. That's the upper limit of what I think is safe to use with manual inspection.

That test set is 89 MB BC7, 11.5 MB BC1, 11 MB BC3, 15 MB BC4

(sizes of uncompressed BCN files)

You're right that Kraken sometimes finds little compression on BC7, but it depends on the texture. There are several examples posted on the various pages that show that case, where Kraken (without Prep or RDO) stays over 7 bits per byte on BC7 (i.e., very little compression) until it gets some help from Oodle Texture.

This particular set of BC7's has a bunch of normal maps that have big areas of flat normals, and some character charts where the whole texture isn't used. It's a real set from a shipping game in 2019.

It was a real set from a 2019 game, but it had a lot of normal maps with big flat areas so they compressed really well.

chris 1515
 

EagleClaw

Member
Dec 31, 2018
10,699
Just to correct something: it's not that they designed a 12-channel SSD, more that they chose one.

Yes, standard SSDs usually have only 4 channels, and the more expensive ones tend to have 8. Even that's not particularly accurate, as SSD manufacturers usually go with whatever channel layout best meets their intended speed targets. It's kind of a balancing act between the number of channels and the speed threshold of the individual NAND chips.

Server-grade SSDs typically have as many as 16 channels.

That balance, though, is what makes this interesting. If you take an 8-channel SSD and try to hit speeds of 5.5GB/s, then you need each chip to maintain around 690MB/s. Drop that to 4 channels and you need NAND chips capable of hitting 1375MB/s each. And the faster you run your NAND flash chips, the hotter they get; you'd need higher-quality chips to reliably hit those speeds. Going with 12 channels while staying at 5.5GB/s means the PS5's chips only need to run at around 460MB/s.
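The arithmetic above is simple enough to sketch. A toy calculation, assuming the target bandwidth splits evenly across channels (which real controllers only approximate):

```python
# Per-chip throughput needed to hit a target SSD bandwidth,
# assuming the load divides evenly across channels (one NAND die per channel).
def per_chip_mb_s(target_gb_s: float, channels: int) -> float:
    return target_gb_s * 1000 / channels

for ch in (4, 8, 12):
    print(f"{ch:2d} channels -> {per_chip_mb_s(5.5, ch):.0f} MB/s per chip")
```

More channels means each individual chip can be slower and cooler while the drive still hits the same aggregate number.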

Thanks for the clarification.
Still nice to see that they put some thought into the "data storage design".

Last console generation, everything was just about CPU, GPU, and RAM, plus a tossed-in 5400rpm HDD.
It was surely time to think about data streaming.
 

Lady Gaia

Member
Oct 27, 2017
2,479
Seattle
Isn't bandwidth always going to be a bottleneck until we somehow match on-die cache speeds?

More to the point, if we could realistically match latency and bandwidth available from cache there wouldn't be a need for cache in the first place. RAM speeds have been a bottleneck for a very, very long time. Heck, you could easily argue that the invention of registers was necessitated by the reality that a CPU is going to run rings around its storage. Always has, and always will.

Engineering is not a discipline whose goal is perfection. That's the realm of pure mathematics and theoretical physics. No, building things in the real world is an exercise in compromise. You may not like the compromises one design makes over another, but you can't argue in good faith that the simple act of making compromises means a design is flawed. If that were the measure, then all designs would be flawed by definition. No, it's about suitability for purpose within constraints. Time and the consumer marketplace, not overzealous fans, will demonstrate the suitability of these designs for their intended use. It's a question of whether the manufacturability, reliability, and price points they hit, and the software they can attract to their ecosystems, lead to market success.

We'll probably still have a variety of opinions on the subject this time next year, but we'll also have the first few rounds of concrete data on that specific subject. In the meantime, I'm far more interested in what developers are able to do with the hardware than anything. The next couple of months should be quite fascinating.
 

Optmst

Member
Apr 9, 2020
471
A few interesting quotes from Edge's UE5 coverage
Karis said:
"It's in the form of textures", "It's actually like, what are the texels of that texture that are actually landing on pixels in your view?", "It's stuff that's not occluded by something else. It's a very accurate algorithm"
Regarding next-gen-games install sizes
Liberi said:
"We'll have state-of-the-art compression algorithms"
Liberi indicated they already have their own compression algorithms and they're working to improve them
 

AegonSnake

Banned
Oct 25, 2017
9,566
A few interesting quotes from Edge's UE5 coverage

Regarding next-gen-games install sizes

Liberi indicated they already have their own compression algorithms and they're working to improve them
tlou2 is 79GB. i have no doubt, ps5 games like rdr3 and whatever naughty dog makes next will blow past the 100gb limit of UHDs. i bet they will come in dual discs.

i hope you guys are playing tlou2. next gen is now. i say holy shit every hour or so. these graphics are crazy.
 

III-V

Member
Oct 25, 2017
18,827
i hope you guys are playing tlou2. next gen is now. i say holy shit every hour or so. these graphics are crazy.
Nearly every area is a handcrafted masterpiece.

When I am not shitting myself, sometimes I just walk around admiring the details, lighting, and sound.

Edit: shinobi602 has been posting some good screenshots. If you told me this was a PS5 I would not be disappointed.

 

AegonSnake

Banned
Oct 25, 2017
9,566
Nearly every area is a handcrafted masterpiece.

When I am not shitting myself, sometimes I just walk around admiring the details, lighting, and sound.

Edit: shinobi602 has been posting some good screenshots. If you told me this was a PS5 I would not be disappointed.


yep. just the sheer number of different areas is insane. it explains why the download is 79GB.

btw, how did they do the mirrors without ray tracing? its eating away at me inside. lol
 
Oct 31, 2019
411
I know, Cerny was very obviously not talking about the plug - because the PS5 won't have one. It's a flawed design: they forgot to put a plug in, since they had to spend the plug money on the backward compatibility work.

I'm skeptical the PS5 can actually turn on. Sure in the optimum situation it might be 10.28TF, but since it doesn't come with a plug it will actually only be 0TF, since it's unlikely to be able to even turn on.
Lolz. Perfect amount of sarcasm to pinpoint exactly the bad faith posts.

Seriously, I don't care for BC beyond PS4. Almost all my games are on PS4, and the games I have on PS3 are all physical, with sentimental value beyond their utilitarian purpose as games, which is why I didn't sell them along with my PS3 console and lots of other physical discs. I know for a fact that people like me, 'mainstream BC guyzzz', are the majority, and the 'look at my walls and walls of games' collectors aren't even 1% of the gaming community. I'm happy with PS4 BC alone, but if full BC comes I'll be happy for you guys; I just wouldn't wait on it.
 

JasoNsider

Developer
Verified
Oct 25, 2017
2,149
Canada
yep. just the sheer number of different areas is insane. it explains why the download is 79GB.

btw, how did they do the mirrors without ray tracing? its eating away at me inside. lol

Most likely just the age-old trick of rendering a separate camera to a texture. It's not cheap, so the fact that it looks this good on a base PS4 is, again, incredible.
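For the curious, the geometric core of that trick: reflect the camera across the mirror plane, render the scene again from the reflected position into a texture, and map that texture onto the mirror surface. A minimal sketch of the reflection step (standard point-plane reflection, not Naughty Dog's actual code):

```python
# Reflect a point across a plane: p' = p - 2 * dot(p - o, n) * n,
# where o is any point on the plane and n is the unit plane normal.
# A mirror renderer places its second camera at the reflection of the
# player camera, then renders the scene into the mirror's texture.
def reflect(point, plane_point, plane_normal):
    d = sum((p - o) * n for p, o, n in zip(point, plane_point, plane_normal))
    return tuple(p - 2 * d * n for p, n in zip(point, plane_normal))

# Mirror on the wall x = 0, facing +x; player camera standing at x = 3:
print(reflect((3.0, 1.7, 2.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # (-3.0, 1.7, 2.0)
```

The cost comes from rendering the scene a second time from that reflected viewpoint, which is why it's rarely used for more than one mirror at a time.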
 

III-V

Member
Oct 25, 2017
18,827
yep. just the sheer number of different areas is insane. it explains why the download is 79GB.

btw, how did they do the mirrors without ray tracing? its eating away at me inside. lol
Most likely just the age-old trick of rendering a separate camera to a texture. It's not cheap, so the fact that it looks this good on a base PS4 is, again, incredible.
Worth it. I spent waaay too much time making faces in the mirror 😂
 

Liabe Brave

Professionally Enhanced
Member
Oct 27, 2017
1,672
Glad you agree that your concern is likely misplaced. The game as released was completed by business entities that still exist and could support its transition to new hardware if necessary. (Not to mention that it's also possible that all the exact same people worked on it through both periods of active development.)

My statement was about small polygons. Small polygons aren't relevant to what Cerny had said if they are handled via shader rasterization.
Your original statement was about small polys, but your second statement was about "bigger polygons". Hence my (continued) confusion. But ignoring that and assuming we're talking solely about small polys, you still haven't made your case. Unreal requires hardware rasterizer performance to fill in for Nanite compute rasterization. Mr. Cerny's stated preference for clockspeed is beneficial to future tech of this sort, not irrelevant to it.

Less than 2000?
How can they say that 1000s of games will work when there are less than 2000?
Because the already-BC list of 360 and original Xbox games would still take the total over 2000. However, I was incorrect about how many Xbox One games there are; see below.

I called it nonsense because i truly believe there will be no "Dragon Age" or "GTA6" with 120fps 8K.
For sure. But neither Sony nor Microsoft has claimed that there will be 8K120 games. Just that both of those things will be supported individually. (They also haven't claimed that either metric, even separately, will be common.)

I didn't know that there is a 120fps game on PS4, never heard about that.
That's because the HDMI out won't export it, so it's impossible for TVs to display. However, PSVR hooks up using two HDMI cables, and doesn't have this limitation. It has multiple games that run at 120fps (some on standard and Pro, some on Pro only).

I don't like using Wikipedia, because it always contains mistakes for extensive lists like this. For example, even just a cursory glance shows that The Evil Within and Psycho-Pass are counted separately. And I'm sure there are other errors. For this reason, I used Xbox.com to get the count.

However, it appears that--for whatever weird reason--the official Xbox site's "list of all games" doesn't include all the games on Xbox One. Some games can only be found by searching on the Microsoft.com store site. Unfortunately, that site has no way to see a comprehensive list of all games. So the true number of catalog titles is unknown. But I do agree my original statement was incorrect. The true library will be closer to 2500 than 1800 titles.

Either way, I still expect there's no shenanigans with Xbox's announced "thousands of titles" for BC. The vast majority of those ~2500 games will be playable on the next Xbox.

Isn't bandwidth always going to be a bottleneck until we somehow match on-die cache speeds?
Sure. The question is whether that will be the first bottleneck you encounter. With how much improvement there's been to CPU, GPU, and I/O with the upcoming consoles, the (comparatively) small increases to bandwidth seem like they could become a more common hindrance.

It was a real set from a 2019 game, but it had a lot of normal maps with big flat areas so they compressed really well.
I believe you're trying to imply that this makes the results of Kraken without RDO somehow misleading, but that doesn't follow. This isn't a toy example, or a cherrypicked scenario. This is a collection of real game texture data. It happened to include more compressible data than usual, but that just means this is a situation any compressor might well encounter during real world use. Of course such a 45% size reduction wouldn't be sustained throughout a game. But if realistic peaks can hit that high, it strongly suggests that overall reductions of 20-30% as previously stated are plausible. Obviously, BC7 compression isn't limited to single digits as you had claimed.
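For reference, the relationship between a compressor's output in bits per byte and its percentage size reduction is plain arithmetic (nothing Oodle-specific): 8 bits per byte means no compression, ~7 bits per byte on raw BC7 is about a 12.5% saving, and a 45% reduction corresponds to about 4.4 bits per byte.

```python
# Convert between compressed bits-per-byte and fractional size reduction.
# 8 bits/byte = incompressible; lower is better.
def reduction_from_bpb(bits_per_byte: float) -> float:
    return 1.0 - bits_per_byte / 8.0

def bpb_from_reduction(reduction: float) -> float:
    return 8.0 * (1.0 - reduction)

print(reduction_from_bpb(7.0))   # 0.125 -> 12.5% smaller
print(bpb_from_reduction(0.45))  # 4.4 bits per byte
```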

The mixture of the data set also tells somewhat against another of your arguments against chris 1515 's interpretation. This was that in upcoming games the proportionate use of BC7 over other BCn formats will rise greatly, meaning what Sony presented wasn't relevant to future BCn ratios. In fact, here we already see in a 2019 release that 70% of texture data were BC7. There's still room for more, of course, and this does mean that BC7Prep could be increasingly important, as you suggested. But at the same time it makes it mildly more plausible that the compression savings quoted by Sony didn't include RDO. (Which is not to say that it definitely didn't; RDO savings may well have been part of the metric.)
 

No_Style

Member
Oct 26, 2017
1,795
Ottawa, Canada
Sure. The question is whether that will be the first bottleneck you encounter. With how much improvement there's been to CPU, GPU, and I/O with the upcoming consoles, the (comparatively) small increases to bandwidth seem like they could become a more common hindrance.

I have no idea what the answer to this is, but what was the last console that didn't encounter bandwidth issues as the first bottleneck? Original PS4?
 

mhayze

Member
Nov 18, 2017
555
What I don't understand is: given the same basic building blocks and the same manufacturing process, what about the XSX design (with its own clearly very high quality cooling solution) would necessitate lower clocks compared to the PS5?
That fixed power budget stuff is not really a way around the basic limits of the underlying process and circuit design. Power and thermal throttling are real things, and more CUs mean more power and heat, but as far as I know that's the only factor affecting clocks. Because clocks per core constantly vary based on power and thermals, total computation in a unit of time is really an "area under the curve" style reading, rather than something fully expressed with a max and average, or base and turbo. So again, why would the PS5 be able to get away with higher constant clocks than the XSX? Or will it not really be able to?
Anyone able to shed some light on this?
 
Oct 27, 2017
3,894
ATL
Holding out on playing TLOU: Part 2 until the PS5 launches is going to be extra tough. I'm holding the line though...even if I'm alone haha.

Anyway, with Lumen in UE5 providing high quality real time global illumination with relatively high performance, is this a sign that baked lighting will eventually go away by the end of the coming console generation?

When it comes to visuals, I'm highly excited to see what Bluepoint will achieve with the Demon's Souls remake. I know we've only seen the tip of the iceberg so far. A quote from their site, "Our latest project is the largest in our history, and aims to define the visual benchmark for the next generation of gaming hardware."
 

褲蓋Calo

Alt-Account
Banned
Jan 1, 2020
781
Shenzhen, China
What I don't understand is: given the same basic building blocks and the same manufacturing process, what about the XSX design (with its own clearly very high quality cooling solution) would necessitate lower clocks compared to the PS5?
That fixed power budget stuff is not really a way around the basic limits of the underlying process and circuit design. Power and thermal throttling are real things, and more CUs mean more power and heat, but as far as I know that's the only factor affecting clocks. Because clocks per core constantly vary based on power and thermals, total computation in a unit of time is really an "area under the curve" style reading, rather than something fully expressed with a max and average, or base and turbo. So again, why would the PS5 be able to get away with higher constant clocks than the XSX? Or will it not really be able to?
Anyone able to shed some light on this?
It probably is just "the team is happy with what they've got".
 

Lady Gaia

Member
Oct 27, 2017
2,479
Seattle
What I don't understand is: given the same basic building blocks and the same manufacturing process, what about the XSX design (with its own clearly very high quality cooling solution) would necessitate lower clocks compared to the PS5?

There's simply not enough information available to give an authoritative answer. Still, there's always room for speculation. To start with, we can't assume that the two have equally effective cooling systems. Sony had an interesting patent come to light a while back that involved cooling the silicon die from both sides, so that's one possible example of a differentiator. Are they doing so? We don't know yet.

That fixed power budget stuff is not really a way around the basic limits of the underlying process and circuit design. Power and thermal throttling are real things, and more CUs mean more power and heat, but as far as I know that's the only factor affecting clocks.

The difference in CU count shouldn't be waved away. Depending on how steep the power curve is, I'd estimate the PS5's GPU likely generates 20-30% more heat at its default clock speed. The CPU will be generating slightly less, and there's no meaningful way to estimate whole-die heat generation, but assuming the ceiling is around 25% higher, more effective cooling could go a long way.
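As a rough illustration of why that estimate hinges on the steepness of the power curve, here's a toy dynamic-power model (power roughly proportional to CU count × frequency × voltage²). The CU counts and clocks are the publicly quoted figures; the voltage pairs are invented assumptions, since neither vendor has published them:

```python
# Toy model: dynamic power scales roughly with CUs * f * V^2.
# 36 CU @ 2.23 GHz (PS5) vs 52 CU @ 1.825 GHz (XSX) are public figures;
# the voltages below are illustrative guesses, not real numbers.
def gpu_power(cus: int, ghz: float, volts: float) -> float:
    return cus * ghz * volts ** 2

for v_ps5, v_xsx in [(1.10, 1.00), (1.20, 1.00)]:
    ratio = gpu_power(36, 2.23, v_ps5) / gpu_power(52, 1.825, v_xsx)
    print(f"V={v_ps5}/{v_xsx}: PS5 draws {ratio:.2f}x the XSX GPU power")
```

Under a shallow voltage curve the two come out nearly equal; under a steeper one the higher-clocked part pulls noticeably ahead, which is the whole ambiguity in this kind of back-of-envelope estimate.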

Still, what else is there? We also had some suggestions surrounding the GitHub leaks that there were a lot of re-spins of the PS5 silicon. Looking at real-world samples can tell you more about thermal characteristics to allow changes in layout or alternative expressions of high-level designs to more effectively distribute heat. Did they do so? It seems pretty likely they did, as the number of revisions seemed unusually high for a design that has likely undergone rigorous validation using traditional simulation methods. Still, it's unknown.

Because clocks per core constantly vary based on power and thermals, total computation in a unit of time is really an "area under the curve" style reading, rather than something fully expressed with a max and average, or base and turbo. So again, why would the PS5 be able to get away with higher constant clocks than the XSX? Or will it not really be able to? Anyone able to shed some light on this?

The question "what clock speed can I run at safely regardless of workload?" is quite different from "what clock speed can I run workload X at?" Mark Cerny discussed this pretty overtly, and it's far from a novel concept. I've been directly involved in projects that applied similar observations toward different end goals, and I can assure you the basic premise is sound. Turning the clock speed down briefly to deal with an unusually power-hungry workload can definitely yield significantly higher average clocks.
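A minimal sketch of that fixed-power-budget idea, not Sony's actual algorithm (the clock cap, the budget, and the linear power-vs-clock model are all illustrative assumptions; real silicon scales superlinearly):

```python
# Fixed-power-budget governor sketch: each workload has a known power draw
# at the reference clock; the chip runs at the highest clock that keeps
# estimated power under the budget. Power is modeled as linear in clock
# here purely for simplicity.
MAX_CLOCK_GHZ = 2.23     # illustrative cap
POWER_BUDGET_W = 180.0   # illustrative budget

def clock_for(power_at_max_clock_w: float) -> float:
    if power_at_max_clock_w <= POWER_BUDGET_W:
        return MAX_CLOCK_GHZ                      # typical frame: full clock
    scale = POWER_BUDGET_W / power_at_max_clock_w
    return MAX_CLOCK_GHZ * scale                  # outlier frame: brief downclock

print(clock_for(150.0))             # stays at 2.23
print(round(clock_for(200.0), 2))   # drops to ~2.01
```

The payoff is that the safe constant clock for the *worst-case* workload no longer bounds the whole design; only the rare outliers pay the downclock.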
 
Last edited:

BreakAtmo

Member
Nov 12, 2017
12,838
Australia

behOemoth

Member
Oct 27, 2017
5,627
I have no idea what the answer to this is, but what was the last console that didn't encounter bandwidth issues as the first bottleneck? Original PS4?
Either the PS2 or PS4.


What I don't understand is: given the same basic building blocks and the same manufacturing process, what about the XSX design (with its own clearly very high quality cooling solution) would necessitate lower clocks compared to the PS5?
That fixed power budget stuff is not really a way around the basic limits of the underlying process and circuit design. Power and thermal throttling are real things, and more CUs mean more power and heat, but as far as I know that's the only factor affecting clocks. Because clocks per core constantly vary based on power and thermals, total computation in a unit of time is really an "area under the curve" style reading, rather than something fully expressed with a max and average, or base and turbo. So again, why would the PS5 be able to get away with higher constant clocks than the XSX? Or will it not really be able to?
Anyone able to shed some light on this?
The consoles won't thermal throttle; they'll shut the system down instead.
To your last point, I don't think the GPU is constantly rendering at full power over the course of one frame. For instance, with vsync your GPU won't output a rendered frame even if it's finished in less than 33ms or 16.7ms (30fps and 60fps respectively); the output waits until it's synchronized with your display to avoid tearing. The PS5's dynamic system can reduce the clock rate within that frame to save power.
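As a toy illustration of that headroom (assuming render time scales inversely with clock, which is only roughly true), a frame that finishes well before the vsync deadline could have been rendered at a much lower clock and still made it:

```python
# If a frame finishes early against the vsync deadline, the slack is headroom:
# the lowest clock that would still hit the deadline, assuming render time
# scales inversely with clock (an idealization).
def min_clock_ghz(clock_ghz: float, render_ms: float, frame_budget_ms: float) -> float:
    return clock_ghz * render_ms / frame_budget_ms

# A 10 ms frame at 2.23 GHz against a 16.7 ms (60 fps) budget:
print(round(min_clock_ghz(2.23, 10.0, 16.7), 2))  # ~1.34 GHz would still hit vsync
```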
 

Chris_Rivera

Banned
Oct 30, 2017
292
I don't know. After seeing the 2018 TLOU2 demo comparison, it's pretty clear how much room for improvement there is in current-gen hardware. Playing that at 60fps would feel next gen. The PS4 Pro version is beautifully crafted, but the limits show in things like the resolution and lighting downgrades.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
blog.quanticdream.com

QD Reacts: PlayStation Round 5! | Quantic Dream

Sony has finally revealed its long-awaited next gen home console. We couldn’t resist asking some…Read More

Quantic Dream reacts to the PS5.

EDIT:
forum.beyond3d.com

Nvidia Ampere Discussion [2020-05-14]

I didn't realize they boosted so far beyond their listed clock speeds. I have a GTX 1660, but I've never really paid attention to the clocking behaviour. It varies a lot from card to card, even by 100s of MHz, depending on cooling and other factors. And of course, different loads get different boosts.

offtopic Ampere pro card
 
Last edited:

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Your original statement was about small polys, but your second statement was about "bigger polygons". Hence my (continued) confusion. But ignoring that and assuming we're talking solely about small polys, you still haven't made your case. Unreal requires hardware rasterizer performance to fill in for Nanite compute rasterization. Mr. Cerny's stated preference for clockspeed is beneficial to future tech of this sort, not irrelevant to it.
It was late; I meant small polygons, not "bigger polygons", that was a typo. Bigger polygons will use the rasterizers, and rasterizers are faster on PS5, no doubt about it, but that wasn't what I was referring to. I was talking about Cerny's comment on small polygons and CU efficiency, which will be mitigated by shader rasterizers like in UE5. So obviously bigger polygons aren't relevant to the subject.

I believe you're trying to imply that this makes the results of Kraken without RDO somehow misleading, but that doesn't follow. This isn't a toy example, or a cherrypicked scenario. This is a collection of real game texture data. It happened to include more compressible data than usual, but that just means this is a situation any compressor might well encounter during real world use. Of course such a 45% size reduction wouldn't be sustained throughout a game. But if realistic peaks can hit that high, it strongly suggests that overall reductions of 20-30% as previously stated are plausible. Obviously, BC7 compression isn't limited to single digits as you had claimed.

The mixture of the data set also tells somewhat against another of your arguments against chris 1515 's interpretation. This was that in upcoming games the proportionate use of BC7 over other BCn formats will rise greatly, meaning what Sony presented wasn't relevant to future BCn ratios. In fact, here we already see in a 2019 release that 70% of texture data were BC7. There's still room for more, of course, and this does mean that BC7Prep could be increasingly important, as you suggested. But at the same time it makes it mildly more plausible that the compression savings quoted by Sony didn't include RDO. (Which is not to say that it definitely didn't; RDO savings may well have been part of the metric.)
I asked Fabian Giesen about the data set because we didn't have enough info about it for the sake of discussion; he answered and I brought the answer here. I wasn't trying to "stick it to Chris" or something. But if you want to dive deeper into this, then this is indeed a real-world scenario. We knew that already, because Fabian had already said it's from a real game, but it is an abnormal set of BC7 that is half flat normal maps and character textures with empty space, which compress better on BC7 than the average image. So it is real-world and something that will happen, but I wouldn't take it as "the typical case". Cerny also talked about 22GB/s, which is even greater compression than that, and I'm sure it's also a real-world example that happens, but that's also not exactly the typical case.

All I was telling Chris a few days ago was: don't expect the average case to compress that well. Even without seeing the actual detail of the texture pack, the fact that Kraken alone compressed BC7 textures so well (when even you quoted Richard Geldreich's 20%-30% figure, so you're aware 45% is high) indicated that it isn't exactly the average case.
 

Mac Dalton

User requested ban
Banned
Oct 29, 2017
286
blog.quanticdream.com

QD Reacts: PlayStation Round 5! | Quantic Dream

Sony has finally revealed its long-awaited next gen home console. We couldn’t resist asking some…Read More

Quantic Dream reacts to the PS5.

Sounds like they don't have access to the DualSense controller?

Ronan: "As for the controller, I need to see how it feels in my hands, because there's force feedback on the triggers that could be really cool if it's cleverly implemented in games. Wait and see, then!"
 

SharpX68K

Member
Nov 10, 2017
10,518
Chicagoland
For PS5 Pro you mean ? I doubt AMD tech will still be called RDNA in 7 years.

I was actually thinking of PS6. However, 72 CUs would also make sense (to me) for a PS5 Pro if there is one. Certainly a PS5 Pro would still be using some generation of RDNA. Let's assume RDNA 4, given that RDNA 3 is due out in late 2021 or early 2022, and a PS5 Pro would arrive in late 2023 at the soonest. Keep in mind the PS4 Pro used Polaris, which was released for PCs a few months before the PS4 Pro was out.

And you're probably right, by 2026-2027 AMD will have probably made a fresh GPU architecture that's not called RDNA anymore.
 

Grayson

Attempted to circumvent ban with alt account
Banned
Aug 21, 2019
1,768
Excuse the question but have they ever actually confirmed gyro in the DualSense?
 

BreakAtmo

Member
Nov 12, 2017
12,838
Australia
I was actually thinking of PS6. However, 72 CUs would also make sense (to me) for a PS5 Pro if there is one. Certainly a PS5 Pro would still be using some generation of RDNA. Let's assume RDNA 4, given that RDNA 3 is due out in late 2021 or early 2022, and a PS5 Pro would arrive in late 2023 at the soonest. Keep in mind the PS4 Pro used Polaris, which was released for PCs a few months before the PS4 Pro was out.

And you're probably right, by 2026-2027 AMD will have probably made a fresh GPU architecture that's not called RDNA anymore.

I agree that the PS5 Pro will have 72CUs - I'm actually hoping that by then, AMD will have made chiplet GPUs, so it can essentially have 2 PS5 GPUs. Maybe with tensor cores as well, to apply something like DLSS?
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
I agree that the PS5 Pro will have 72CUs - I'm actually hoping that by then, AMD will have made chiplet GPUs, so it can essentially have 2 PS5 GPUs. Maybe with tensor cores as well, to apply something like DLSS?
I'm waiting for the day chiplets become a thing; it will change everything in the GPU market, and consoles too.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Do we have any hard facts on when they will show up? Do you think they would be ready for a PS5 Pro in 2023/4?
PS6 for sure, IMO; a Pro is less likely, but maybe. We don't have a concrete road map, but at least on NVIDIA's side, rumors are that their next architecture, Hopper (the one after Ampere, which comes out this year), will use chiplets. AMD is already using chiplets in their Ryzen 3000 series, and they've announced that they will use chiplets in GPUs in the future, but it's unknown when that will happen. The same goes for Intel: announced, but unknown when. Will it take 2 years? 5 years? No idea.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
tlou2 is 79GB. i have no doubt, ps5 games like rdr3 and whatever naughty dog makes next will blow past the 100gb limit of UHDs. i bet they will come in dual discs.

i hope you guys are playing tlou2. next gen is now. i say holy shit every hour or so. these graphics are crazy.
preordered, but only got my copy today :|
 

ShapeGSX

Member
Nov 13, 2017
5,228
Still, what else is there? We also had some suggestions surrounding the GitHub leaks that there were a lot of re-spins of the PS5 silicon. Looking at real-world samples can tell you more about thermal characteristics to allow changes in layout or alternative expressions of high-level designs to more effectively distribute heat. Did they do so? It seems pretty likely they did, as the number of revisions seemed unusually high for a design that has likely undergone rigorous validation using traditional simulation methods. Still, it's unknown.

No, they would not have tried alternative layouts just to change thermals. Changing the layout significantly is a difficult and time-consuming task. The thermals would have been modeled long before tape-out of the first chip. And what are you going to do, swap one compute unit for another? There's nowhere to move anything.

I suspect that the revisions were for bug fixes or to get timing to converge under new conditions. For example, (hmmm) a higher clock speed/higher temperature.

So many people outside the industry think of the clock speed limit as just a cooling problem. It is not. You have to design the paths between sequential memory elements to run at a particular clock speed. If you have too many logic gates, or they're too small, or the wires between them are too long or too skinny, you won't hit your target clock speed. And there are literally trillions (quadrillions?) of these paths snaking all over each chip. You fix each "worst path" in each block until it meets your clock speed goal. That's not even getting into changing the logic to meet timing. Or worse, the architecture.
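The timing-closure constraint above can be sketched numerically. The delay figures here are made-up illustrative values, not any real process numbers:

```python
# Max clock from the worst-case path: the clock period must cover the launching
# flop's clock-to-Q delay, the logic/wire delay of the slowest path, and the
# setup time of the capturing flop. All times in nanoseconds (illustrative).
def f_max_ghz(t_clk_to_q_ns: float, worst_path_ns: float, t_setup_ns: float) -> float:
    return 1.0 / (t_clk_to_q_ns + worst_path_ns + t_setup_ns)

print(round(f_max_ghz(0.05, 0.35, 0.05), 2))  # 0.45 ns period -> 2.22 GHz
# Shaving 0.02 ns off the worst path buys a meaningfully higher clock:
print(round(f_max_ghz(0.05, 0.33, 0.05), 2))  # 2.33 GHz
```

This is why a higher clock target means re-fixing every "worst path" that no longer fits the shorter period, regardless of how good the cooling is.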
 

Kenzodielocke

Member
Oct 25, 2017
12,851
Question: the PS4 Pro doesn't support a 1440p monitor the way it should; the Xbox One X does.

How will next gen handle odd resolutions? We know they're both HDMI 2.1, so they should take advantage of FreeSync. I'm looking to buy a WQHD/UWQHD (2560x1440/3440x1440) monitor and was wondering if they can even do this.
 

Sekiro

Member
Jan 25, 2019
2,938
United Kingdom
previous thread: https://www.resetera.com/threads/pl...cture-deep-dive-ot-secret-agent-cerny.175780/

mama I made it.
 

MrKlaw

Member
Oct 25, 2017
33,064
What I don't understand is, given the same basic building blocks and same manufacturing processes, what about the XSX design (with it's own clearly very high quality cooling solution) would necessitate lower clocks compared to the PS5?
That fixed power budget stuff is not really a way around the basic limits of the underlying process and circuit design. Power and thermal throttling are things for sure, and more CUs = power power and heat, but there's really only that as far as I know that would be affecting clocks. Because of the constant varying of clocks per core based on power and thermals, total computation in a unit time and clocks are really an "area under the curve" style reading, rather than something that's fully expressed with a max and average, or base and turbo. So again, why would the PS5 be able to get away with higher constant clocks than the XSX? Or will it not really be able to?
Anyone able to shed some light on this?

XSX isn't going for lower clocks than PS5 - PS5 is going for higher clocks than a 'normal' approach.

XSX does the 'pick a clock speed that will work with your cooling system no matter what'.

Sony looks to have done more of the 'pick a clock speed and cooling that will handle 95-97% of profiled workloads in real-world game engines for PS4 (and extrapolated for PS5), and clock down to stay within the thermal window for the outlier tasks' - that allows them to pick a higher base speed.

I'd fully expect MS to take a similar approach - possibly even with XSX-X in mid-gen
 

dgrdsv

Member
Oct 25, 2017
11,885
I agree that the PS5 Pro will have 72CUs - I'm actually hoping that by then, AMD will have made chiplet GPUs, so it can essentially have 2 PS5 GPUs. Maybe with tensor cores as well, to apply something like DLSS?
Chiplets won't work well for graphics without some serious software-side changes (moving from rasterization to 100% RT, for example).
Which is also why we won't get a chiplet-based upgrade to a monolithic APU system.
 

Deleted member 1589

User requested account closure
Banned
Oct 25, 2017
8,576
I agree that the PS5 Pro will have 72CUs - I'm actually hoping that by then, AMD will have made chiplet GPUs, so it can essentially have 2 PS5 GPUs. Maybe with tensor cores as well, to apply something like DLSS?
My uneducated guess would be that a PS5 Pro might go with a 54CU (6 CUs deactivated) with 3 CUs in a workgroup.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
I have a question:

Is the PS5's Primitive Shader functionally the same as the (DX12U) Mesh Shader? Is the former lacking any form of programming flexibility in comparison to the latter?
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Question: the PS4 Pro doesn't support a 1440p monitor the way it should; the Xbox One X does.

How will next gen handle odd resolutions? We know they're both HDMI 2.1, so they should take advantage of FreeSync. I'm looking to buy a WQHD/UWQHD (2560x1440/3440x1440) monitor and was wondering if they can even do this.
Remains to be seen; it would be weird if Sony still doesn't support that output resolution natively, though.
XSX isn't going for lower clocks than PS5 - PS5 is going for higher clocks than a 'normal' approach.

XSX does the 'pick a clock speed that will work with your cooling system no matter what'.

Sony looks to have done more of the 'pick a clock speed and cooling that will handle 95-97% of profiled workloads in real-world game engines for PS4 (and extrapolated for PS5), and clock down to stay within the thermal window for the outlier tasks' - that allows them to pick a higher base speed.

I'd fully expect MS to take a similar approach - possibly even with XSX-X in mid-gen
I expect the same thing. Sony's approach can get you better performance from any given silicon, and it's particularly well suited to console (fixed) hardware.
I have a question:

Is PS5's Primitive Shader functionally the same as (DX12U) Mesh Shader? Is the former lacking any form programming flexibility in comparison to the latter?
It's the same thing.
 