
nib95

Contains No Misinformation on Philly Cheesesteaks
Banned
Oct 28, 2017
18,498
i think the ps5 gpu clock is load-dependent, devs will simply not bother coding for the worst case scenario and clock it much lower than 2.23 ghz to avoid instances where all of a sudden the gpu detects a major spike in the gpu or worse the cpu load, and decides to downclock it leaving the cpu starved of the clocks it needs to run crucial A.I, game logic and other physics related tasks that need to run on the cpu.

what cerny is asking devs to do is to go line by line through the code, scene by scene, firefight after firefight and playtest their game for the worst case scenario just in case the cpu clocks drop and all of a sudden, enemy a.i, npc a.i and other logic starts to lag because the system simply couldnt handle it. for graphics related tasks, devs might just implement dynamic scaling but again because the apu is throttled by activity, the devs will have to look for which part of the code is causing spikes. And then devs would need to lower clocks based on clock downgrades every time.

i highly doubt many third party devs would bother coding for worst case scenarios. they will clock the gpu at 2.0 ghz or whatever runs most stable without fluctuations and call it a day.

Developers already sort of account for this type of thing with variable resolutions, e.g. dynamic resolution scaling that reduces the resolution when the hardware is being pushed too hard and/or frame rates are likely to tank. I'm guessing developers will simply deal with variable clocks in a similar way, with engines designed with variability in mind to account for such changes or load limitations.
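To illustrate the kind of feedback loop I mean, here's a minimal sketch of a dynamic resolution controller. The 33.3 ms target, thresholds, step size and function names are all made up for illustration; real engines have their own heuristics.

```python
# Illustrative dynamic-resolution controller (hypothetical numbers and names).
# If the last frame took too long, render the next one at a lower internal
# resolution; if there's headroom, creep back up toward native.

TARGET_FRAMETIME_MS = 33.3          # assumed 30 fps target
MIN_SCALE, MAX_SCALE = 0.7, 1.0     # assumed resolution-scale bounds
STEP = 0.05

def next_resolution_scale(current_scale: float, last_frametime_ms: float) -> float:
    if last_frametime_ms > TARGET_FRAMETIME_MS * 1.05:   # running hot: drop resolution
        return max(MIN_SCALE, current_scale - STEP)
    if last_frametime_ms < TARGET_FRAMETIME_MS * 0.90:   # headroom: scale back up
        return min(MAX_SCALE, current_scale + STEP)
    return current_scale                                 # close enough: hold

# Example: a 36 ms frame at full resolution drops the next frame to 95% scale.
print(next_resolution_scale(1.0, 36.0))   # -> 0.95
```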
 

gofreak

Member
Oct 26, 2017
7,740
i think the ps5 gpu clock is load-dependent, devs will simply not bother coding for the worst case scenario and clock it much lower than 2.23 ghz to avoid instances where all of a sudden the gpu detects a major spike in the gpu or worse the cpu load, and decides to downclock it leaving the cpu starved of the clocks it needs to run crucial A.I, game logic and other physics related tasks that need to run on the cpu.

what cerny is asking devs to do is to go line by line through the code, scene by scene, firefight after firefight and playtest their game for the worst case scenario just in case the cpu clocks drop and all of a sudden, enemy a.i, npc a.i and other logic starts to lag because the system simply couldnt handle it. for graphics related tasks, devs might just implement dynamic scaling but again because the apu is throttled by activity, the devs will have to look for which part of the code is causing spikes. And then devs would need to lower clocks based on clock downgrades every time.

i highly doubt many third party devs would bother coding for worst case scenarios. they will clock the gpu at 2.0 ghz or whatever runs most stable without fluctuations and call it a day.

Nah, they'll let it run as is, let the machine make the most of whatever load is being thrown at it, and for times where there's a contention on priority for clocks between the two, they'll set a profile to indicate their preference. The developer can tell the machine ahead of time what the tie-breaker is in cases where the machine does have to compromise and isn't sure what's best for the game.

That's my take based on what Cerny said and on what Alex at DF wrote about profiles etc.

They don't have to worry about 'just in case' - it either drops or won't, and that's deterministic. They don't have to worry about a 'just in case' something changes from what they themselves have observed. Devs already have to run tests and playtests for performance - PS5 is no different in that regard. The only place where it does differ is that where the machine can't make the best choice for the software AND where there is a performance problem, the developer's optimisation space may encompass both CPU and GPU - which has plusses as well as minuses as far as i can see.

For example, in this scenario - a heavy CPU and a heavy GPU workload, where framerate is not acceptable and you're roughly equally bound by both CPU and GPU performance - on a fixed frequency system you may have to tackle the frametime on both the CPU and the GPU sides. On PS5, it's possible that optimising one side will unlock enough clock headroom for the other processor to bring its frametime down to an acceptable level in tandem, automatically and without any further intervention/work - killing two birds with one stone.
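To make the 'profile as tie-breaker' idea concrete, here's a purely hypothetical toy sketch. Nothing public documents such an API; every name and value below is invented, it's just the shape of the idea.

```python
# Hypothetical illustration of a developer-chosen "tie-breaker": when both
# processors want more of the shared power budget than is available, the
# profile says which one gives way first. Invented names throughout.

from enum import Enum

class PowerPreference(Enum):
    FAVOR_CPU = "cpu"   # protect game logic / AI / physics first
    FAVOR_GPU = "gpu"   # protect rendering / resolution first

def resolve_contention(cpu_wants_more: bool, gpu_wants_more: bool,
                       profile: PowerPreference) -> str:
    # The profile only matters when both sides are contending; otherwise
    # each side simply runs as fast as the power budget allows.
    if cpu_wants_more and gpu_wants_more:
        return "trim GPU clock" if profile is PowerPreference.FAVOR_CPU else "trim CPU clock"
    return "no intervention needed"

print(resolve_contention(True, True, PowerPreference.FAVOR_CPU))   # -> trim GPU clock
```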
 
Last edited:

lukeskymac

Banned
Oct 30, 2017
992
LOL at this comparison.

PS5 - 10.28 Tflops
2070 - 7.5 Tflops
2070 Super - 9.1 Tflops

Xbox Series X - 12.15 Tflops
2080 Super - 11.1 Tflops
2080 Ti - 14.2 Tflops

Let's ignore for the moment that these are entirely different architectures, that Turing Tflops are a bit different, that Cerny himself said only a 2% clockspeed drop would be required to reduce power by 10% (which would still keep the PS5 at or over 10 Tflops), and that he expects the clockspeed to stay at its peak the majority of the time: why on Earth are you assuming the PS5's performance is going to be equivalent to Turing 7-9 Tflops, whilst simultaneously assuming the XSX is going to be equivalent to Turing 11-14 Tflops?

You are massively under selling the PS5 in this laughable comparison.

They short-circuited the TFLOPs difference into a game performance difference, would be my guess. They also rounded up the XSX advantage massively, I guess (the 2080 performs 18% better than the 2070, but the 2080 Ti performs 25% better than the 2070 SUPER).

What?

It's literally just that with most of the variability removed by the differences I specified. It's an evolved form of the exact same concept.

I think we'll both agree that this disagreement stems from different personal opinions on what "the same concept" is, then.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Oh come on. This is just silly. I'm beginning to think your doom and gloom persona is just for kicks. You're being ridiculous.
good talk.
Developers already sort of account for this type of thing with variable resolutions, eg that reduce the resolution when the hardware is being pushed too hard and/or frame rates are likely to tank. I'm guessing developers will simply deal with variable clocks in a similar way, with engines that are designed with variability in mind to account for such changes or load limitations.
but they dont. this kind of variable clock logic has never been used in PCs, let alone consoles. That's what everyone has been saying. it is NOT like what we have seen in games before. PCs dont downclock based on load, they just drop frames and run hotter. the way cerny described it, he clearly says it's based on activity.

it would be hilarious if an A.I just freezes in the middle of the game because the game downclocked the cpu.
 

M3rcy

Member
Oct 27, 2017
702
I think we'll both agree that this disagreement stems from different personal opinions on what "the same concept" is, then.

The concept is, "clock up to a limit unless a condition exists to prevent that and then clock lower if there is no other means to resolve that condition" and it is exactly the same for both. The difference is in what the conditions are that cause clocks to lower, the mechanisms available to avoid having to lower clocks and, per Cerny's description, the degree that clocks can lower. The same differences can be seen in the various forms of Nvidia's boost as it has evolved over time, though, and you wouldn't call them a different concept.
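A tiny sketch of that shared shape, with made-up numbers; whether the limiting condition is temperature, board power or, on PS5, modelled activity, the control loop looks the same.

```python
# Sketch of the common "boost" concept: clock up toward a ceiling unless a
# limiting condition is active, then back off. All values are invented.

MAX_CLOCK_MHZ = 2230    # assumed ceiling
MIN_CLOCK_MHZ = 2000    # assumed floor, for illustration only
STEP_MHZ = 10

def next_clock(current_mhz: int, limit_exceeded: bool) -> int:
    """Clock up toward the ceiling unless a limiting condition is active."""
    if limit_exceeded:
        return max(MIN_CLOCK_MHZ, current_mhz - STEP_MHZ)   # back off
    return min(MAX_CLOCK_MHZ, current_mhz + STEP_MHZ)       # otherwise push up

# e.g. at the 2230 MHz ceiling with the power limit hit, the next step is 2220 MHz.
print(next_clock(2230, limit_exceeded=True))   # -> 2220
```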
 

gofreak

Member
Oct 26, 2017
7,740
good talk.

but they dont. this kind of variable clock logic have never been used in PCs, let alone consoles. That's what everyone has been saying. it is NOT like what we have seen in games before. PCs dont downclock based on load, they just drop frames and run hotter. the way cerny described it, he clearly says its based on activity.

it would be hilarious if an A.I just freezes in the middle of the game because the game downclocked the cpu.

That's...not how games work. The game would freeze, not some specific component.

A game can freeze - have frame dips or stutter - on any hardware if you push it too far.

We're not in a world, pre-variable clocks, where devs could just throw whatever at the hardware and not worry...Devs have to do performance testing.

What's different in PS5 is that there's a certain range of shared performance between the CPU and GPU so that it can adapt to different software's 'balance'. You have to bear in mind that up to a certain point - probably within a range of some % of total performance - the bound in either cpu or gpu can be affected by changes made to the other.

I think in some cases this will change how devs approach performance optimisation - it could have upsides as well as downsides - but the concept of testing and firefighting performance bounds is not some brand new world. In the end, every tested target platform is checked for acceptable performance, and if bounds are found, they're addressed until performance is acceptable again (or not, and you get performance problems!).

What nib said is true by the way - games with dynamic resolution do effectively adapt to load. If the frametime exceeded a certain threshold last frame, the resolution gets throttled down. They don't just let frametime go to the pits.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Nah, they'll let it run as is, let the machine make the most of whatever load is being thrown at it, and for times where there's a contention on priority for clocks between the two, they'll set a profile to indicate their preference. The developer can tell the machine ahead of time what the tie-breaker is in cases where the machine does have to compromise and isn't sure what's best for the game.

That's my take between what Cerny said, between what Alex at DF wrote about profiles etc.

They don't have to worry about 'just in case' - it either drops or won't, and that's deterministic. They don't have to worry about a 'just in case' something changes from what they themselves have observed. Devs already have to run tests and playtests for performance - PS5 is no different in that regard. The only place where it does differ is that where the machine can't make the best choice for the software AND where there is a performance problem, the developer's optimisation space may encompass both CPU and GPU - which has plusses as well as minuses as far as i can see.

For example, in this scenario - a heavy cpu and a heavy gpu workload, where framerate is not acceptable and you're roughly equally bound by both CPU and GPU performance, on a fixed frequency system, you may have to tackle the frametime on both the cpu and the gpu sides. On PS5, it's possible optimising one side will unlock enough clock headroom for the other to automatically bring down the other processor's frametime to an acceptable level in tandem, and without any further intervention/work - to kill 2 birds with one stone.
That profile is exactly what I am talking about. The profile will be 2.0 ghz or whatever the most consistent clocks are that dont cause the gpu to downgrade clocks.

I know devs already playtest, but how many times this gen, and every gen before it, have we seen devs not do that kind of optimization and simply let frames drop? We just saw it in games like Control where base consoles were dipping down to 10 fps. and this was on fixed clocks. they knew the limits of the system, they must have seen the issues during playtests and they did not bother optimizing.

What Cerny is asking devs to do now is to tackle it not just on a scene by scene basis, but also frame by frame, and if they dont do that, the consequences are dire. The fixed clocks they relied on before are gone, and now they will have no choice but to account for that. unless like you said, they create a profile with lower clocks and stick to that.
 

-Le Monde-

Avenger
Dec 8, 2017
12,613
good talk.

but they dont. this kind of variable clock logic have never been used in PCs, let alone consoles. That's what everyone has been saying. it is NOT like what we have seen in games before. PCs dont downclock based on load, they just drop frames and run hotter. the way cerny described it, he clearly says its based on activity.

it would be hilarious if an A.I just freezes in the middle of the game because the game downclocked the cpu.
You're watching too much WW. 😂
 

Mubrik_

Member
Dec 7, 2017
2,727
That profile is exactly what I am talking about. The profile will be 2.0 ghz or whatever the most consistent clocks are that dont cause the gpu to downgrade clocks.

I know devs already playtest, but how many times we have seen games this and the gen before and well every gen before that where devs dont do that kind of optimization, and simply let frames drop. We just saw it in games like Control where base consoles were dipping down to 10 fps. and this was on fixed clocks. they knew the limits of the system, they must have seen the issues during playtests for performance and they did not bother optimizing.

What Cerny is asking devs to do now is to tackle it on not just a scene by scene basis, but also frame by frame, and if they dont do that, the consequences are dire. they fixed clocks they relied on before, and now they will have no choice but to account for that. unless like you said, they create a profile with lower clocks and stick to that.

I think you should pause a bit
Take some time and understand what you're discussing before making this conclusion.
 

AegonSnake

Banned
Oct 25, 2017
9,566
That's...not how games work. The game would freeze, not some specific component.

A game can freeze - have frame dips or stutter - on any hardware if you push it too far.

We're not in a world, pre-variable clocks, where devs could just throw whatever at the hardware and not worry...Devs have to do performance testing.

What's different in PS5 is that there's a certain range of shared performance between the CPU and GPU so that it can adapt to different software's 'balance'. You have to bear in mind that up to a certain point - probably within a range of some % of total performance - the bound in either cpu or gpu can be affected by changes made to the other.

I think in some cases this will change how devs approach performance optimisation - it could have upsides aswell as downsides - but the concept of testing and firefighting performance bounds is not some brand new world. In the end, every tested target platform is checked for acceptable performance, and if bounds are found, they're addressed until performance is acceptable again (or not, and you get performance problems!).

What nib said is true by the way - games with dynamic resolution do effectively adapt to load. If the frametime exceeded a certain threshold last frame, the resolution gets throttled down. They don't just let frametime go to the pits.
Thats even worse. Games crashing just because the gpu throttled down clocks is going to be a nightmare.

Games are also becoming 50-100 hours long, and more and more sandbox-like. no amount of playtesting can account for that worst case scenario where cerny's gpu will downclock all of a sudden. im guessing it will tell the system before it does, so maybe devs will have a head start and can turn down resolution, but that adds so much complexity i dont think any third party dev save for maybe rockstar would bother coding to the metal.
You're watching too much WW. 😂
lmao. i didnt even think of that but thats perfect.
 

Toriko

Banned
Dec 29, 2017
7,711
If only Cerny had consulted AegonSnake before designing this architecture. Cerny would have realized early on that AI will freeze on PS5. :)

Goddammit Cerny! Too late now.
 

gofreak

Member
Oct 26, 2017
7,740
That profile is exactly what I am talking about. The profile will be 2.0 ghz or whatever the most consistent clocks are that dont cause the gpu to downgrade clocks.

I know devs already playtest, but how many times we have seen games this and the gen before and well every gen before that where devs dont do that kind of optimization, and simply let frames drop. We just saw it in games like Control where base consoles were dipping down to 10 fps. and this was on fixed clocks. they knew the limits of the system, they must have seen the issues during playtests for performance and they did not bother optimizing.

What Cerny is asking devs to do now is to tackle it on not just a scene by scene basis, but also frame by frame, and if they dont do that, the consequences are dire. they fixed clocks they relied on before, and now they will have no choice but to account for that. unless like you said, they create a profile with lower clocks and stick to that.

Under the model I was considering, the clock is not fixed. The profile is a tiebreaker the dev can choose when the machine can't decide the 'balance' to use e.g. in heavy cpu and heavy gpu scenarios. At the 'non-tricky' times, the clocks boost.

That model may be wrong, btw, but it's the closest way I've been able to reconcile what Cerny explained and what Alex at DF wrote.

Your idea of devs having to tweak every frame in the game if they let clocks 'float' is outlandish. They will optimise as they always do - run tests, find the bound, optimise for the bound. If the game runs fine there's nothing to do. For thorny optimisation problems, you may want to go 'frame by frame' - this is true on a fixed clock system too. The only difference on PS5 is that there's a range of give and take between cpu and gpu to account for in scenarios where the cpu and gpu are heavily contending for power draw. That actually has potential advantages ('two birds, one stone') as well as disadvantages (more co-op across cpu/gpu disciplines on a team perhaps).

Thats even worse. Games crashing just because the gpu throttled down clocks is going to be a nightmare.


Frametime varies in games already. All the way up to game-locking scenarios or bugs, while in development. Devs already have to look for blackspots and work them out.

If they don't, the game could run poorly, or even break a TRC and crash.

Devs will have a deterministic view of performance on PS5 - it's as testable, reproducible etc. as any other system. You're not going to have machine-breaking games passing through QA any more than you already have.
 
Last edited:

gozu

Member
Oct 27, 2017
10,397
America
i think the ps5 gpu clock is load-dependent, devs will simply not bother coding for the worst case scenario and clock it much lower than 2.23 ghz to avoid instances where all of a sudden the gpu detects a major spike in the gpu or worse the cpu load, and decides to downclock it leaving the cpu starved of the clocks it needs to run crucial A.I, game logic and other physics related tasks that need to run on the cpu.

what cerny is asking devs to do is to go line by line through the code, scene by scene, firefight after firefight and playtest their game for the worst case scenario.

You jumped the shark on this :)

The way I understood it, CPU gets priority. Only if it has spare power does it pass it to the GPU. So the logic will never be starved as you implied. Problem solved. The GPU, as you said, can just use dynamic scaling.

Anyways, games are played to death during QA, so it doesn't matter what the flops are, devs don't use those for precise measurements. They use playtest sessions. Thousands of them.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Under the model I was considering, the clock is not fixed. The profile is a tiebreaker the dev can choose when the machine can't decide the 'balance' to use e.g. in heavy cpu and heavy gpu scenarios. At the 'non-tricky' times, the clocks boost.

That model may be wrong, btw, but it's the closest way I've been able to reconcile what Cerny explained and what Alex at DF wrote.

Your idea of devs having to tweak every frame in the game if they let clocks 'float' is outlandish. They will optimise as they always do - run tests, find the bound, optimise for the bound. If the game runs fine there's nothing to do. For thorny optimisation problems, you may want to go 'frame by frame' - this is true on a fixed clock system too. The only difference on PS5 is that there's a range of give and take between cpu and gpu to account for in scenarios where the cpu and gpu are heavily contending for power draw. That actually has potential advantages ('two birds, one stone') as well as disadvantages (more co-op across cpu/gpu disciplines on a team perhaps).
Right, so under the worst case scenarios, the profile reverts to a lower clock to bring down the temps/power load, which in turn forces the dev to make compromises. I am guessing resolution and framerate first before A.I, physics and destruction, but that creates race conditions like the one you described where both cpu and gpu are fighting for more power draw. What happens then? More work for devs and more resources to try and figure out where this happens. Whereas on the series x, they just feed the gpu whatever graphics data it needs to render at a fixed resolution, and it drops to 45 fps like RE3make on X1X even though it runs at native 4k. they clearly didnt bother optimizing that game on a scene by scene, frame by frame basis now did they? i am sure that game runs at native 4k 60 fps locked when its just Jill running around an empty hall with no enemies chasing her.

The only difference with the PS5 is that devs will be forced to optimize for heavier loads, and then create profiles to adjust framerate and resolution on a scene by scene basis, or risk freezing like you said in an earlier post. My contention is that they wont do it on a scene by scene basis; they will create one profile after their playtesting and call it a day. Just like they did with literally every single Xbox One X port that runs at native 4k but has worse framerates than the Pro running at half the resolution.
 

lukeskymac

Banned
Oct 30, 2017
992
What Cerny is asking devs to do now is to tackle it on not just a scene by scene basis, but also frame by frame, and if they dont do that, the consequences are dire.

...he's asking the exact opposite of that. The whole point of this is that the same combinations of instructions always light up the APU consistently, so variations are deterministic. The framerate on games is not going to be any more variable than it already is on ""fixed"" clocks. If you implemented a function using AVX instructions X, Y, it's always going to downclock to the Z speed during them. There'll be no guesswork.
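Toy contrast between the two models being argued about, with invented numbers; the point is only that an activity-based clock is a pure function of the workload, while thermal throttling also depends on the environment.

```python
# Activity-based: same code -> same clock, every run, on every console.
# Thermal: the same code can land on different clocks in a hot room.
# All frequencies and thresholds below are made up for illustration.

def activity_based_clock(avx_heavy: bool) -> float:
    return 3.2 if avx_heavy else 3.5                 # GHz; deterministic

def thermal_clock(avx_heavy: bool, ambient_c: float) -> float:
    base = 3.2 if avx_heavy else 3.5
    return round(base - 0.2, 1) if ambient_c > 30 else base   # varies with the room

print(activity_based_clock(True), activity_based_clock(True))   # identical every run
print(thermal_clock(True, 22.0), thermal_clock(True, 35.0))     # differs with ambient temp
```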
 

Cyborg

Banned
Oct 30, 2017
1,955
They made the PS5 for the developers. so no way in hell they will make it difficult to develop for. It will be easy to work with.
They just forgot about gamers with PS5.
joke, a lil bit
 

gofreak

Member
Oct 26, 2017
7,740
Right, so under the worst case scenarios, the profile reverts to a lower clock to bring down the temps/power load which in turn forces the dev to make compromises. I am guessing resolution and framerate first before A.I, physics and destruction, but that creates race conditions like the one you described where both cpu and gpu are fighting for more power draw. What happens then? More work for devs and more resources to try and figure out where this happens. Where as on the series x, they just feed the gpu whatever graphics data it needs to render at a fixed resolution, and it drops to 45 fps like RE3make on X1x even though it runs at native 4k. they clearly didnt bother optimizing that game on a scene by scene, frame by frame basis now did they? i am sure that game runs at native 4k 60 fps locked when its just Jill running around an empty hall with no enemies chasing her.

The only difference with the PS5 is that devs will be forced to optimize on heavier loads, and then create profiles to adjust framerate and resolution on a scene by scene basis. or risk freezing like you said in an earlier post. What my contention is that they wont do it on a scene by scene basis, they will create one profile after their playtesting and call it a day. Just like they did with literally every single Xbox one x port that runs at native 4k but has worse framerates than the pro running at half the resolution.


There are no race conditions à la software threading...just to clarify that. That term has a whole other connotation.

Devs do optimise games for the bound. They don't just throw it at the machine and call it a day. Or if they do, you're risking all the same 'catastrophes' you're talking about in a fixed clock system as a variable one.

There is nothing new with PS5, except that when you find a 'blackspot' for performance, of a particular kind, you could loosen or tighten a bound on one component by operating on the other.

The risk you're talking about - of a developer putting out a game blindly and creating problems in the wild that they were unaware of - is no more heightened with PS5 than with any other machine. If a dev isn't testing for performance, the catastrophic risk of crashing you're talking about is present regardless.

You can be sure, with RE, that they absolutely did check for performance. It was just deemed acceptable (in the context of their resources/schedule etc), dips and all.

What you're talking about in a worst case - even and high load on both components, with no intervention - is that the game will run at the framerate (or resolution) allowed by the clocks the machine is running at in that scenario. That won't be a 'crash' - there'll be a range of some performance throttling, but it'll be a fraction of the total. And this is no different to any other machine - when you reach its limit, and you don't optimise, you're just running at its limit. If that means a frametime higher than the last scene you were looking at, your framerate goes down. This is how it happens on any machine.
 
Last edited:

gozu

Member
Oct 27, 2017
10,397
America
Personally, if I were Cerny and if I had a choice between doubling minimum SSD speed or increasing GPU perf by 15%, I would have picked the SSD speed. I was an early adopter of SSDs back in 2008 starting with Intel's 80GB offering and I remain as bullish as I ever was on them, with one nagging exception of bottlenecks.

Would any of you really have chosen a 15% over a 100% improvement between equally important bottlenecks ?
 

Timlot

Banned
Nov 27, 2019
359
Question. When Cerny says PS5 power draw will remain constant, but the frequency will vary, does that mean, for example, that during gameplay the power supply will run at a sustained, let's say, 200 watts? Maybe I'm misunderstanding, but that seems really inefficient to draw the same power regardless of cpu/gpu demand.

I was just watching some old DF videos where the PS4 Pro (which has a 310w power supply) would vary between 75w-165w during gameplay. Well below what the power supply is rated at. Wouldn't you want an electronic device to only use the electricity it needs?
 

tusharngf

Member
Oct 29, 2017
2,288
Lordran
Question. When Cerny says PS5 power draw will remain constant, but the frequency will vary. Does that mean,for example, during gameplay the power supply will run at a sustained lets say 200 watts? Maybe I'm misunderstanding, but that seems really inefficient to draw the same power regardless of cpu/gpu demand.

I was just watching some old DF videos where the PS4 Pro (which has a 310w power supply) would vary between 75w-165w during gameplay. Well below what the power supply is rated at. Wouldn't you wan't an electronic device to only use the electricity it needs?
The higher the frequency, the more the power draw, unless they have made modifications to the apu - but that's a universal law.
 

gofreak

Member
Oct 26, 2017
7,740
Question. When Cerny says PS5 power draw will remain constant, but the frequency will vary. Does that mean,for example, during gameplay the power supply will run at a sustained lets say 200 watts? Maybe I'm misunderstanding, but that seems really inefficient to draw the same power regardless of cpu/gpu demand.

The power draw still varies I think.

What's become fixed - constant - is the worst case power draw of the system. Not the current one at a given point in time. That, I think, can still vary with workload.

On PS4, by the sounds of it, the worst case power draw was a guesstimate. You might think it was the 310 watts of the PSU, but that was just an ample PSU to accommodate an unknown possible future case. Similarly the cooling system was designed for a guesstimated future worst case, not 310W of power at max load. It was either do that, or clock the system more conservatively.

For PS5, the worst case is fixed, and you don't fix your clocks ahead of time on an assumed worst case. You let them 'float', within some range at least, so what are hopefully more typical workloads can go faster than otherwise would be allowed, while the 'worst cases' are brought under a strict power control you can more confidently design your cooling and PSU for.

The conundrum when designing a system is a) how do i know this is the worst case thermal envelope my system will have to deal with? and b) how do i know some workloads couldn't get away with running faster? Sony's concluded you can't really know, so let it vary depending on workload rather than having your worst case thermals potentially go out of control with a high clock, or rather than clocking conservatively in a way that leaves performance on the table for many games.
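Back-of-the-envelope sketch of 'fix the power budget, let the clock float'. The wattage and power-per-GHz figures are invented; the point is just that a light workload sits at the cap while a power-hungry one settles a few percent lower, and neither ever exceeds the budget the cooling and PSU were designed around.

```python
# Toy power-capped clock model. All numbers are assumptions for illustration.

POWER_BUDGET_W = 200.0    # assumed fixed SoC budget
MAX_CLOCK_GHZ = 2.23

def allowed_clock(watts_per_ghz: float) -> float:
    """Highest clock whose estimated draw still fits the fixed budget."""
    return min(MAX_CLOCK_GHZ, POWER_BUDGET_W / watts_per_ghz)

print(round(allowed_clock(80.0), 2))   # light workload -> 2.23 (held at the cap)
print(round(allowed_clock(95.0), 2))   # heavy workload -> 2.11 (a few % lower)
```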
 

androvsky

Member
Oct 27, 2017
3,519
Question. When Cerny says PS5 power draw will remain constant, but the frequency will vary. Does that mean,for example, during gameplay the power supply will run at a sustained lets say 200 watts? Maybe I'm misunderstanding, but that seems really inefficient to draw the same power regardless of cpu/gpu demand.

I was just watching some old DF videos where the PS4 Pro (which has a 310w power supply) would vary between 75w-165w during gameplay. Well below what the power supply is rated at. Wouldn't you wan't an electronic device to only use the electricity it needs?
It can't keep cranking up the clock past 2.23 GHz. What they're doing is flattening out the peak power draws, so instead of changing between 75w - 165w, it'd be more like 75w - 145w. If it's seriously constant, then it'd just be 75w all the time during gameplay (assuming same conditions as the DF video) if Sony's really aggressive about power and heat (unless it's sitting idle in the OS or something). They're obviously not forcing the APU to use electricity it doesn't need after going through all this with variable clocks, that's the exact opposite of what they've engineered.

A truly constant power draw like you're suggesting would require them to build an electric space heater that turns on when the APU load drops below a certain point.
 

Nachtmaer

Member
Oct 27, 2017
347
Question. When Cerny says PS5 power draw will remain constant, but the frequency will vary. Does that mean,for example, during gameplay the power supply will run at a sustained lets say 200 watts? Maybe I'm misunderstanding, but that seems really inefficient to draw the same power regardless of cpu/gpu demand.

I was just watching some old DF videos where the PS4 Pro (which has a 310w power supply) would vary between 75w-165w during gameplay. Well below what the power supply is rated at. Wouldn't you wan't an electronic device to only use the electricity it needs?
What he means is that there's a ceiling to how much power the SoC (and I assume the individual components as well) can draw and if a certain workload hits this, clocks will start to scale down to maintain that fixed cap.

This has nothing to do with the PSU. The PSU just provides however much power is requested from the console. They always leave some headroom so it doesn't overdraw when there's a possible spike and this way your PSU is also a bit more efficient.
 

amstradcpc

Member
Oct 27, 2017
1,771
Personally, if I were Cerny and if I had a choice between doubling minimum SSD speed or increasing GPU perf by 15%, I would have picked the SSD speed. I was an early adopter of SSDs back in 2008 starting with Intel's 80GB offering and I remain as bullish as I ever was on them, with one nagging exception of bottlenecks.

Would any of you really have chosen a 15% over a 100% improvement between equally important bottlenecks ?
Not me. In my desktop, which isn't used for gaming, the most noticeable upgrade was by far a Samsung EVO SSD.
 

BradGrenz

Banned
Oct 27, 2017
1,507
That profile is exactly what I am talking about. The profile will be 2.0 ghz or whatever the most consistent clocks are that dont cause the gpu to downgrade clocks.

You don't seem to understand that there is no way to "profile to 2.0ghz". It's a completely nonsensical concept in the context of how the GPU in PS5 actually works. You additionally don't seem to understand what kinds of loads actually cause power draw to increase. Which is to say, none of your concerns have any basis in reality.
 

AegonSnake

Banned
Oct 25, 2017
9,566
You don't seem to understand that there is no way to "profile to 2.0ghz". It's a completely nonsensical concept in the context of how the GPU in PS5 actually works. You additionally don't seem to understand what kinds of loads actually cause power draw to increase. Which is to say, none of your concerns have any basis in reality.
Hi Mr. Cerny, Good to finally be able to talk to you. Can you please point me to the PS5 specifications where all of this is explained in detail.
Thanks,
Aegon

P.S Please forward any demos you might have where you have seen this downclock in action since you seem to know everything about the PS5 real game performance in detail, to the point where you are getting offended by folks who dare to speculate about this brand new tech on a message board.
 

jroc74

Member
Oct 27, 2017
29,046
The one obvious thing is that yes Sony wanted to get as much performance out of a 36CU GPU and they have obviously achieved that goal. Why did they go with 36CU's? Most likely with affordability in mind.
Agree. It's also looking like it was for BC too.

Whatever the case, they planned on using 36cu's from the get go, no matter what.

Personally, if I were Cerny and if I had a choice between doubling minimum SSD speed or increasing GPU perf by 15%, I would have picked the SSD speed. I was an early adopter of SSDs back in 2008 starting with Intel's 80GB offering and I remain as bullish as I ever was on them, with one nagging exception of bottlenecks.

Would any of you really have chosen a 15% over a 100% improvement between equally important bottlenecks ?
Nope, if it was a choice of either or, I'll take the SSD on steroids.

This is from someone who still thinks the difference between the base consoles were overblown. The difference didn't stop me from playing Recore and State of Decay on my One S. And I didn't lose any sleep getting a Pro instead of a One X.

What is making a difference for me is noise tho.....
 

AegonSnake

Banned
Oct 25, 2017
9,566
There was an hour long presentation last week. Look it up.
I did. Multiple times. And that's what I'm using to base my speculation on, just like everyone else including you.

A few days ago, everyone in this thread was saying how this is brand new tech that has never been seen in PCs before, and now you are acting like an expert trying to police and shut down discussion just because you dont like where it is going.
 

KeRaSh

I left my heart on Atropos
Member
Oct 26, 2017
10,290
That profile is exactly what I am talking about. The profile will be 2.0 ghz or whatever the most consistent clocks are that dont cause the gpu to downgrade clocks.

I know devs already playtest, but how many times we have seen games this and the gen before and well every gen before that where devs dont do that kind of optimization, and simply let frames drop. We just saw it in games like Control where base consoles were dipping down to 10 fps. and this was on fixed clocks. they knew the limits of the system, they must have seen the issues during playtests for performance and they did not bother optimizing.

What Cerny is asking devs to do now is to tackle it on not just a scene by scene basis, but also frame by frame, and if they dont do that, the consequences are dire. they fixed clocks they relied on before, and now they will have no choice but to account for that. unless like you said, they create a profile with lower clocks and stick to that.
Would be a pretty weird move to base a huge part of their next gen strategy on actual dev feedback and further improve the "time to triangle" just to implement a system that causes a ton of extra work for devs.
I'm not a psychic, but based on what Cerny said combined with actual dev feedback saying they're extremely happy with the system, I'm pretty sure it's not as cumbersome as you are making it out to be.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Agree. It's also looking like it was for BC too.

Whatever the case, they planned on using 36cu's from the get go, no matter what.


Nope, if it was a choice of either or, I'll take the SSD on steroids.

This is from someone who still thinks the difference between the base consoles were overblown. The difference didn't stop me from playing Recore and State of Decay on my One S. And I didn't lose any sleep getting a Pro instead of a One X.

What is making a difference for me is noise tho.....


12 tflops + 5 gbps monster.

It seems Cerny's cooling solution can handle a TDP even higher than the Series X's, which makes their saving of $20-30 on the APU even more disappointing. They could've given us a 13.3 tflops console at the same TDP had they gone wide at 2.0 ghz.
 

Lady Gaia

Member
Oct 27, 2017
2,483
Seattle
Thats even worse. Games crashing just because the gpu throttled down clocks is going to be a nightmare.

As disconnected from reality as many of your expressed concerns are, this really takes the cake. Code hasn't executed at a constant rate, even with fixed clocks, for decades. It has been variable and subject to external influences like cache contention, memory bandwidth contention, and other factors besides. Clocks that vary over time aren't a new thing either, and even with related load-sensitive approaches, where I have been involved with similar efforts, it didn't lead to catastrophic results. There have been shipping products for years that are similar, if applied to different ends and in different industries. You are welcome to your fears, of course, but none of the doom and gloom you project is going to have any bearing on the reality of this console.
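If anyone wants to see this for themselves, here's a trivial timing experiment; the workload is arbitrary, but even on a fixed-clock machine the run-to-run spread is never zero, because caches, memory contention and the OS all get a say.

```python
# Measure the same fixed workload repeatedly and look at the spread.

import statistics
import time

def workload() -> int:
    return sum(i * i for i in range(200_000))

samples = []
for _ in range(10):
    start = time.perf_counter()
    workload()
    samples.append((time.perf_counter() - start) * 1000)   # milliseconds

print(f"min {min(samples):.2f} ms, max {max(samples):.2f} ms, "
      f"stdev {statistics.stdev(samples):.2f} ms")
```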
 

OnPorpoise

Avenger
Oct 25, 2017
1,300
It's legitimate to bring up potential developer issues or to be skeptical of how much the GPU will really stay at those higher frequencies.

Going the extra step of spitballing hypothetical nightmare scenarios for the PS5 that blue nugroho is likely to read and then retweet is where it all falls down.
 

AegonSnake

Banned
Oct 25, 2017
9,566
As disconnected from reality as many of your expressed concerns are, this really takes the cake. Code hasn't executed as a constant rate with fixed clocks for decades. It has been variable and subject to external influences like cache contention, memory bandwidth contention, and other factors besides for decades. Clocks that vary over time aren't a new thing, either, and even with related load-sensitive approaches where I have been involved with similar efforts it didn't lead to catastrophic results. There have been shipping products for years that are similar, if applied to different ends and in different industries. You are welcome to your fears, of course, but none of the doom and gloom you project is going to have any bearing on the reality of this console.
Surely if you had followed the discussion, you would've noticed that was actually gofreak's suggestion of what could happen in a worst case scenario. Not my original suggestion.

Regardless, I am not saying the console will ship with games that crash constantly or that A.I will stop working, because i think devs will not take that risk. They will set a max clock and settle for that based on their testing to avoid instances where clocks have to be adjusted based on power draw. That is all. Everyone throttles, the question is will third party devs bother to optimize to make full use of the 10.2 tflops gpu or settle for less to ensure the game runs without issues.

We will see what happens at launch. The tflops difference between the two GPUs is only 18%. If 18% is the average resolution gap we see then clearly I was wrong, and no profile for a 2.0 or 2.1 ghz clock was needed. If the gap is in the 25-30% range then clearly the GPU wont always be able to sustain those max clocks 98% of the time like Cerny said and devs had to go with a set profile instead.
 

Lady Gaia

Member
Oct 27, 2017
2,483
Seattle
Regardless, I am not saying the console will ship with games that crash constantly or that A.I will stop working, because i think devs will not take that risk. They will set a max clock...

What evidence suggests that developers take any role in setting clocks? Nothing in The Road to PS5 suggests that this is the case. Instead, what has been described is an automatic system that makes changes as needed on the developer's behalf. There was no discussion of picking a profile or a clock speed, just a system that adapts to changing power consumption demands. In fact, it was quite explicitly the opposite of what you describe which is how the PS4 operated: the designers picked what clocks they thought could handle the worst case load, which meant that when running anything but the worst case it wasn't fully exploiting the hardware.
 

lukeskymac

Banned
Oct 30, 2017
992
Surely if you followed the discussion, you would've noticed that was actually gofreak's suggestion at what could happen in a worst case scenario. Not my original suggestion.

Regardless, I am not saying the console will ship with games that crash constantly or that A.I will stop working, because i think devs will not take that risk. They will set a max clock and settle for that based on their testing to avoid instances where clocks have to be adjusted based on power draw. That is all. Everyone throttles, the question is will third party devs bother to optimize to make full use of the 10.2 tflops gpu or settle for less to ensure the game runs without issues.

We will see what happens at launch. The tflops difference between the two GPUs is only 18%. If 18% is the average resolution gap we see then clearly I was wrong, and no profile for a 2.0 or 2.1 ghz clock was needed. If the gap is in the 25-30% range then clearly the GPU wont always be able to sustain those max clocks 98% of the time like Cerny said and devs had to go with a set profile instead.

LMAO if the resolution differences even reach 18% I'll be surprised.
 

AegonSnake

Banned
Oct 25, 2017
9,566
What evidence suggests that developers take any role in setting clocks? Nothing in The Road to PS5 suggests that this is the case. Instead, what has been described is an automatic system that makes changes as needed on the developer's behalf. There was no discussion of picking a profile or a clock speed, just a system that adapts to changing power consumption demands. In fact, it was quite explicitly the opposite of what you describe which is how the PS4 operated: the designers picked what clocks they thought could handle the worst case load, which meant that when running anything but the worst case it wasn't fully exploiting the hardware.
The following bit is literally the only real explanation we have of how this thing works. Just a couple of sentences by Cerny.



He says they look at the activities the cpu and gpu are performing and set the frequencies on that basis. This could mean one of two things: either the PS5 is setting the frequency based on whatever instructions are being passed to the cpu and gpu, and devs simply have no idea what frequency to target at any given point, or devs have full access to what is happening behind the scenes and get to see which activities increase the power usage, so they can optimize code based on the impact those activities have on the gpu power and thus the gpu/cpu frequency.

i dont know what's worse. option 1 or option 2. option 1, devs have no idea what to target since frequency is ever changing. or option 2 where devs have to micromanage, and throttle the activity to ensure the ps5 doesnt need to throttle the frequency.
 
Mar 22, 2020
87
I've not been on the thread for a while; I've seen many people starting pretty rough conversations and getting (rightfully) banned for it. We apparently drifted and ended up talking purely about clocks, so I wanted to make sure I address a couple of things said earlier that are causing a lot of tumult now.

I mean, yes it does. It's inefficient. They're pushing over the efficiency of their GPU design.
[...]
What traditional boost ?
Where's the smart design in pushing clocks higher than the efficiency point ?
[...]
Then again, it's fine to not care about power draw. I didn't make it as a bad point. Just that it's not a smart design as some people claims. There's nothing smart to push higher clocks to the point of inefficiency. It almost sounds like an afterthought.
We actually don't know where the efficient, linear part of the curve lies for those new RDNA2 GPUs; given the new process node and AMD's claims about improved clocking circuitry, there should be a larger threshold before we see poor returns on clock frequency. Sony didn't only push clocks higher; the XSX chip definitely can't reach clocks that high and generally draws more power anyway, so they had headroom.

The 3.2GHz cores were not conservative for their time unless you're comparing those devices with pretty pricy alternatives. The fact that power scales exponentially with clock is not some big secret and there is no magic sweet spot where this isn't true. At lower clock speeds it's possible to keep a constant voltage and merely ride a power of two curve, but as Intel has noted with current leakage and other effects alongside higher frequencies it's getting a lot closer to a power of three. Yes, you can find a "sweet spot" where you're closer to scaling with the square of the clock speed, but that still means that you get less than 5% performance increase for 10% additional power. Once it reaches cube scaling it's closer to 3% increase for each 10% in additional power.

This isn't new. We've been pushing clock speeds as high as we can get away with forever, because it keeps die sizes and therefore manufacturing costs down for the performance levels.
I wouldn't say it's exponential; the formula for dynamic power would be something like ~ C * V² * f (C is capacitance, V the supply voltage and f the clock frequency the design runs at). Capacitance more or less stays the same, but you'd need to increase voltage to push higher clocks, and at some point, yes, you could see exponential voltage requirements to push clocks further (you also probably need to completely remove the thermal headroom to see this).
It depends on what the V/f curve looks like for the new AMD parts; TSMC reports N7P lowering iso-frequency power by 7% on average. By design, the XSX GPU should use marginally higher voltages and leak more power. Temperature also tends to raise leakage power, hence I hope they cautiously fine-tuned for low temperatures (<80°C would be nice).
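To put rough numbers on that relation (P ≈ C·V²·f), here's a tiny worked example; the voltage figures are illustrative assumptions about the shape of the V/f curve, not measured values for any chip.

```python
# Relative dynamic power for given voltage and frequency ratios, P ~ C * V^2 * f.

def relative_power(v_ratio: float, f_ratio: float) -> float:
    return (v_ratio ** 2) * f_ratio

# Pushing clocks 10% higher and needing ~10% more voltage to hold them:
print(round(relative_power(1.10, 1.10), 2))   # -> 1.33, i.e. ~33% more power

# Conversely, if a 2% clock drop lets voltage fall ~4% on a steep part of the
# V/f curve, you already save roughly 10% power:
print(round(relative_power(0.96, 0.98), 2))   # -> ~0.9
```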

My understanding is that's how it works on the XSX as well: if you access the faster or slower pool, you lock the entire bus and prevent any other device from accessing it for the period of the memory transaction. The difference being that memory transactions will take the same amount of time on the PS5 regardless of where the data is, but the majority of memory accesses by the CPU will likely take longer on the XSX.
I think they mentioned games using 13.5GB, with 6GB of the 16GB being part of the slower pool. 6 out of 10 memory controllers are linked to the smaller pool, so accessing at least 12GB does create quite a lot of contention, though.
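For reference, the bandwidth arithmetic behind the split pools, using the widely reported Series X figures (GDDR6 at 14 Gbps on a 320-bit bus, i.e. ten 32-bit channels, with the 6GB "slow" pool spanning six of them):

```python
# Series X memory bandwidth arithmetic from public figures.

PIN_SPEED_GBPS = 14        # GDDR6 data rate per pin
CHANNEL_WIDTH_BITS = 32
TOTAL_CHANNELS = 10        # 320-bit bus
SLOW_POOL_CHANNELS = 6     # channels backing the 6GB pool

def bandwidth_gb_s(channels: int) -> float:
    return PIN_SPEED_GBPS * CHANNEL_WIDTH_BITS * channels / 8

print(bandwidth_gb_s(TOTAL_CHANNELS))       # -> 560.0 GB/s (10GB "fast" pool)
print(bandwidth_gb_s(SLOW_POOL_CHANNELS))   # -> 336.0 GB/s (6GB "slow" pool)
```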

There is no large GPU power difference between the 2 consoles. It's the smallest it has ever been, and a resolution difference will absolutely be the only tangible delta - one you are (most probably not) going to notice.

While CPU cache is insanely fast, it's also much, much smaller than the unified GDDR6 memory pool. Seems to me like each would allow for different approaches, meaning techniques optimized for the console architecture may not be able to translate to the PC's advantages.
Remember the goal of the cache is not to replace a faster pool of memory; it can effectively store only a "very" small fraction of assets and hide the larger latencies of accessing VRAM. I think the I/O ASICs would be an interesting addition to future generations of desktop CPUs; however, I don't think separate pools of faster or slower memory require that much rework?

Clocks are some of the "easiest" parts of a GPU/CPU to adjust.
Definitely untrue, especially when you consider that +100/200MHz is a big deal for high end GPUs. If Sony asked for that much of a frequency hike, it's unlikely AMD and TSMC could supply enough chips that pass the 2.2GHz threshold. Binning would not allow it.

performance scales linearly with clocks. its the temperatures that do not.
those in game results are off because 5700xt is bad at rendering games at 4k. maybe its the bandwidth or some other limitation elsewhere. a better test would be to test games at 1440p.
the firestrike scores scale linearly as you increase clocks.
I'm afraid that's not a great way to test; a Fire Strike score is not a measure of performance. When raising clocks, you also see diminishing returns, because your clocks can be stable on average but still cause performance regressions when faster calculations also result in errors. This also appears quite significantly when overclocking memory. I don't think "the 5700 XT is bad at rendering 4K games"; a GPU with more CUs usually shows bigger differences as the resolution increases, because the size of the problem increases and lower frequencies become less of an issue. That's confirmed on both RDNA and Turing when gaming at 1080p/1440p and 4K. Using more games also makes for a better figure.

but they dont. this kind of variable clock logic have never been used in PCs, let alone consoles. That's what everyone has been saying. it is NOT like what we have seen in games before. PCs dont downclock based on load, they just drop frames and run hotter. the way cerny described it, he clearly says its based on activity.
I don't necessarily agree; Intel, AMD and Nvidia all use boost algorithms clearly based on load, temperature and power, and the AMD firmware made for consoles is not unique in that respect.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
This discussion about the GPUs clocks cant be the reaction Sony was expecting. Another thing they should clarify asap.
I agree. Like I couldn't agree more. If they can see there is obviously some confusion with what they have said they should clear it up and put it to rest. I mean how sony is dealing with the PS5 and how MS is dealing with the series X couldn't be more different. Embarrassingly different even.

Just a simple google search for the series X gives you this as the first link.
www.xbox.com

Xbox Series X | Xbox

Discover the fastest, most powerful Xbox ever with the Xbox Series X.

Doing the same for the PS5 you get this.
www.playstation.com

PlayStation®5 | Play Has No Limits | PlayStation

Everything you need to know about the PlayStation®5 console and PlayStation®5 Digital Edition - the best PS5 games, PS5 accessories, and introducing the DualSense wireless controller.

You have to specify PSblog in your search to even get this... which in itself is a joke.
blog.us.playstation.com

Unveiling New Details of PlayStation 5: Hardware Technical Specs [UPDATED]

Watch live for a deep dive into PS5's system architecture and how it will shape the future of games.


Hence Sony calling it "continuous boost". Is this a console bubble thing that is causing so many to struggle with this? The precedent for this usage of the term on PC GPUs has a fairly long history. A quick search turned up this Nvidia article from 2014 - https://devblogs.nvidia.com/increase-performance-gpu-boost-k80-autoboost/
I don't know man... it just doesn't make sense to me. From its inception, my understanding of a boost clock has been pretty simple. Using examples:
Chip has a minimum clock = 500Mhz
Chip has a normal clock = 1500Mhz
Chip has a boost clock = 1800Mhz

As long as the system above is working within its thermal limits it would work in the normal clock range, but if there is some thermal headroom it can boost to its boost clock for a brief period of time. What makes it a boost clock is that it's temporary, and it goes up from the "normal" clock.

With the PS5 what we have is:
APU has a normal clock = 2.2Ghz
APU has a load/power-dependent throttle clock = 2.1Ghz

It doesn't "go up"/"boost" from some arbitrary lower clock to that 2.2Ghz, but rather stays at 2.2Ghz all the time, with the exception of very specific and rare circumstances.

How these two approaches are the same thing is beyond me.

i think the ps5 gpu clock is load-dependent, devs will simply not bother coding for the worst case scenario and clock it much lower than 2.23 ghz to avoid instances where all of a sudden the gpu detects a major spike in the gpu or worse the cpu load, and decides to downclock it leaving the cpu starved of the clocks it needs to run crucial A.I, game logic and other physics related tasks that need to run on the cpu.

what cerny is asking devs to do is to go line by line through the code, scene by scene, firefight after firefight and playtest their game for the worst case scenario just in case the cpu clocks drop and all of a sudden, enemy a.i, npc a.i and other logic starts to lag because the system simply couldnt handle it. for graphics related tasks, devs might just implement dynamic scaling but again because the apu is throttled by activity, the devs will have to look for which part of the code is causing spikes. And then devs would need to lower clocks based on clock downgrades every time.

i highly doubt many third party devs would bother coding for worst case scenarios. they will clock the gpu at 2.0 ghz or whatever runs most stable without fluctuations and call it a day.
I disagree. The PS5 is literally designed in a way to circumvent this. And all that any dev needs to do is enable a variable resolution. Which i expect every dev to be doing now anyway.
 
Last edited:

D BATCH

Member
Nov 15, 2017
148
What sort of worst case scenario differences do you think ports will see?

For example what resolution and framerate on PS5 VS XSX?

AND Last - do you see any advantages the PS5 ssd has outside loading?
Missed the other questions, sorry. I think ports will just show a resolution difference and FPS stability. RT in 3rd party games should also perform better on XBSX: DXR 1.1, VRS, more CUs, etc.
 

Lady Gaia

Member
Oct 27, 2017
2,483
Seattle
This could mean one of two things, either the PS5 is setting the frequency based on whatever instructions are being passed to the cpu and gpu, and devs simply have no idea what frequency to target at any given point, or devs have full access to what is happening behind the scenes and they get to see which activities increase the power usage and can work on optimizing code based on impact these activites have on the gpu power and thus the gpu/cpu frequency.

i dont know what's worse. option 1 or option 2. option 1, devs have no idea what to target since frequency is ever changing. or option 2 where devs have to micromanage, and throttle the activity to ensure the ps5 doesnt need to throttle the frequency.

Developers already profile code to find the most efficient way to get work done by successive refinement. They already have to deal with the reality that instructions don't execute in trivially predictable amounts of time, as that's just a reality of modern processor architecture and has been for a very long time. Nothing changes in their workflow. Whether it makes you nervous or not is inconsequential, because the people doing this work aren't going to be negatively impacted. It doesn't make their job any more complicated than it is already, because it's axiomatic: you gauge performance by testing your code running in real-world conditions, not based on a simplified theoretical model of how it ought to behave.

Again: I've been doing this kind of work for decades, have worked on substantially similar systems, and have led teams that built profiling tools used to build software you may already use every day. You're creating imaginary problems, and at some point the only possible conclusion is that it's your objective.