This is my understanding as well. Plan to create a transcript tomorrow.

Replying to: "My understanding, from what Cerny was saying, is that the level of power consumed by the chip isn't merely a function of clock speed; it also depends on what is actually being processed. Certain instructions, certain types of work, consume more power, independent of the clock speed they run at.

Hence, you could have times where both the CPU and GPU are, effectively, running at their max clocks, and other times where the clocks have to vary.

Usually clocks are varied with thermals, or cooling is varied with thermals. The latter is what often happens in consoles: even though the PS4's clock speeds were fixed, its power consumption varied with workload, and so did the thermals, which is why your fan kicks up more in some games, or at some points in a game, than in others.

The PS5 looks at the workload it's currently processing and what that workload means for power consumption; if its power consumption at max clocks comes in under the PS5's power and thermal dissipation budget, then max clocks are maintained. Cerny, it seems, expects that 'most' workloads will meet that criterion.

Their alternative was to try to guess the worst-case workloads and then always run the chip at a lower fixed clock speed to accommodate them. That's what they did with their consoles previously. Why do that, why fix clocks around the assumed worst case, if the workload and its power demands actually change, and in some, or perhaps many, cases you could get away with a higher clock? That appears to be the question they started with."
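To make the scheme in that quoted explanation concrete, here is a minimal sketch of how a power-budget-driven boost could work in principle. Every number and name below is an illustrative assumption, not Sony's actual algorithm: the idea is just that the chip estimates what the current instruction mix would draw at maximum clocks and only shaves frequency when that estimate exceeds a fixed budget.

```python
# Minimal sketch of a power-budget clock governor (illustrative numbers only;
# not Sony's actual algorithm). Dynamic power is modelled as f * V^2 with
# voltage scaling roughly with frequency, so power ~ activity * f^3.

F_MAX_GHZ = 2.23          # advertised max GPU clock
POWER_BUDGET_W = 200.0    # hypothetical fixed SoC power budget
WORST_CASE_W = 220.0      # hypothetical draw of the most power-hungry
                          # instruction mix at max clock

def estimated_power(activity, freq_ghz):
    """Power estimate for a workload at a given clock.

    'activity' in (0, 1] models how power-hungry the instruction mix is,
    independent of clock (dense math hammering every unit vs. lighter work).
    """
    scale = freq_ghz / F_MAX_GHZ
    return WORST_CASE_W * activity * scale ** 3

def pick_clock(activity, step_ghz=0.005):
    """Hold max clock while the workload fits the budget; otherwise shave
    frequency in small steps until the estimate fits."""
    f = F_MAX_GHZ
    while estimated_power(activity, f) > POWER_BUDGET_W and f > 0:
        f -= step_ghz
    return round(f, 3)

print(pick_clock(0.85))   # typical workload: holds 2.23
print(pick_clock(1.00))   # worst-case mix: drops only a few percent, ~2.16
```

The point of a model like this is that the trigger is the workload's power draw rather than temperature: a fixed budget means deterministic clocks for a given instruction mix, regardless of the room the console sits in.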
So, just to be clear, the power supply of the PS5 cannot handle a 3.5GHz CPU clock and a 2.23GHz GPU clock concurrently? Ever? That being the case, I'm trying to figure out why he would say that clock speeds decrease specifically in "worst case games."

Replying to: "Nothing I typed changes anything about what was presented at the talk, which is just as valid as when it was said there. One component uses more power when the other one is not using it, same thing that was presented there. Both cannot max the power budget, as was also said there."
I'll have to rewatch that section of the presentation but I'm pretty sure it can maintain those speeds. If someone can get there before I get the opportunity and quote what was said, that'd be great.
So all this seems to depend on what is happening in the game at any given time. But if the game needs max power from both the GPU and CPU (that is the worst case), the clocks drop a couple of percent. That's how I understand it.
That's an excellent explanation of what's actually happening inside the PS5, and why Cerny/Sony went the way they did. There's really no point lowering clock speeds and having them fixed instead of letting them run higher most of the time and only lowering them when required.
So I went back and re-watched that section. Here's what Cerny said:
"That doesn't mean all games will be running at 2.23GHz and 3.5GHz. When that worst-case game arrives, it will run at a lower clock speed but not too much lower. To reduce power by 10%, it only takes a couple of percent reduction in frequency. So I'd expect any down-clocking to be pretty minor."
Right before this quote, he said both CPU and GPU will run at those max clocks most of the time. I'm taking this to mean that both CPU and GPU can run at those max frequencies when needed, but if they're needed simultaneously then the frequencies for one or both would drop by a few percent. There's some debate about this in the other thread.
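The arithmetic behind that "couple of percent" line works out if you assume dynamic power scales roughly with frequency times voltage squared, and that voltage has to track frequency near the top of the curve. That roughly cubes the relationship, so a small clock cut buys a much larger power cut (a back-of-the-envelope model, not official figures):

```python
# Rough check of "couple of percent frequency for ~10% power", assuming
# dynamic power ~ f * V^2 with voltage scaling roughly with frequency,
# i.e. power ~ f^3 near the top of the curve.
f_drop = 0.035                      # a 3.5% clock reduction (illustrative)
power_ratio = (1 - f_drop) ** 3     # relative power after the drop
print(f"{(1 - power_ratio) * 100:.1f}% less power")   # ~10.1%
```

If voltage falls off faster than linearly as the clock comes down, the frequency cut needed for 10% power is smaller still, which is presumably where "not too much lower" comes from.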
RT is done on the RT/CU cores. All the calculation is done on those; the SSD is just data transfer etc. It's why many suggest the Series X will be better at RT overall, but without solid examples on both sides it's still up in the air as to which one will be better. Both are taking a different approach.

Replying to: "How does the SSD affect RT? I'm really into this next gen feature and would like to know which platform can produce, in theory, better RT."
So the SSD is not really a factor in how good RT will look. Since the PS5 has fewer CUs, does that mean a dev can make more effective use of them and reach the potential of the PS5's RT capabilities more quickly?
It's more complicated than that, since there are so many factors that will come into play, like RAM speed, bandwidth, CPU usage, etc. On paper the Series X should be better, but without RT samples on both sides it's hard to say. Fewer CUs would generally mean less horsepower to generate RT, which would lead to a lesser implementation of RT. RT, even on PCs, really needs a lot of grunt to push it well with smaller performance hits.
Forgive me if I'm wrong, but to my understanding RT is a computationally hard problem. Not hard in the CS sense of time complexity and NP-hardness, but you have to do a lot of calculations to do RT. Like really, really a lot. Yes, compute units in modern computers are often limited by I/O wait, no doubt about that, and both consoles do a lot to mitigate the I/O wait problem. But in the end you calculate a lot for RT, and if I'm not completely wrong, especially a lot in parallel.
Again, we have to wait to see what Sony does with RT. I am sure they will have examples at some point.
Agreed. It's not a simple thing to do, but that is why the PC space has RT cores. How that works for Series X and PS5 with their different approaches is yet to be seen. Minecraft has been the main gaming RT/path tracing showcase we have seen from MS. Nothing on the Sony front yet, but I am sure it's there. Obviously nobody wants RT if it has a major hit on performance or if the RT is of lesser quality.
Series X will be better at RT. Better GPU with more shading power, more intersection tests, and more bandwidth = better RT performance.
Dictator, curious: are you expecting fully ray traced console games this coming generation? With ray traced audio and levels maxed out with reflections and realistic bouncing light and whatnot? And what would be the difference between the two in that case? Will SX have more detailed reflections or something?
I may be in the minority here, but I don't really find today's ray tracing demos that impressive. Sure, I see what it does, but it's typically something you totally forget after 5 minutes of playing and don't even notice when you're in the heat of the game. HDR does more for me, tbh.
If you design for RT from the start, you can't just turn it off, because then you have to do lighting design for a full raster pipeline as well. There might be a lower precision RT toggle, as we've seen on PC, but any dev who completely jumps in the pool (like Metro's 4A Games) ain't getting out.

Replying to: "I wonder if next gen we will have a performance mode for high framerate enthusiasts, and an RT mode at lowered resolutions (720/1080p) for 'better visuals' lovers."
The Minecraft demo is fully path traced. It's more of an experiment, really. There are a whole host of other ray tracing effects that can, and will, be implemented next gen.

Replying to: "I mean, Xbox showed off Minecraft, and even that only runs at 1080p and below 60fps on the Series X when full ray tracing is applied. Sure, it was a 'fast' job, but it shows how much power is actually needed, and expecting AAA graphics and full ray tracing, unless some MAJOR breakthrough comes along, sounds improbable. And with Lockhart seemingly being real, it more or less kills the idea of games built around only ray tracing for the rest of this gen when it comes to lighting, unless they create two completely different versions of the same game (or the Lockhart version runs <720p)."
I believe they said it was 60 fps. Maybe not the capture itself, but when they showcased the demo, it ran at 60 fps.
Thanks, didn't know that. Interesting if Lockhart comes onto the scene with very limited RT capabilities. Then devs would need two very different versions of the game, if I understood correctly, or a very limited RT mode.
Besides, there will be things like upscaling and dynamic shading rates to claw back performance.
Hopefully the addition of the SSD alone will sort that issue out, but it will also take some smart programming around the RAM/GPU.
It can.

Replying to: "So, just to be clear, the power supply of the PS5 cannot handle a 3.5GHz CPU clock and a 2.23GHz GPU clock concurrently? Ever? That being the case, I'm trying to figure out why he would say that clock speeds decrease specifically in 'worst case games.'"
Is PS5 going to have noticeably worse RT, though, if it's generally running games at a slightly lower dynamic resolution than the same game on Xbox? RT scales pretty much linearly with resolution, so if PS5 is pushing fewer pixels then it also needs fewer rays for equivalent RT visuals. I honestly cannot see 3rd party PS5 RT games looking massively different from XSX RT games apart from the resolution.

Replying to: "It's clear Xbox Series X has the power for outstanding RT on consoles, but I'm still really bloody excited about how Polyphony Digital's implementation will end up looking. I'm sure Kaz's team, as a bleeding edge visual team, will have a treat lined up for PlayStation 5 owners."
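To put a rough number on "RT scales linearly with resolution" (the ray counts and resolutions below are assumptions for illustration, not figures from either console): the per-frame ray budget is basically pixel count times rays per pixel, so a dynamic-resolution gap translates directly into a proportional ray-count gap.

```python
# Rough illustration: per-frame ray work scales with pixel count.
# Rays-per-pixel and the resolutions are assumptions for the example.
rays_per_pixel = 2                        # e.g. one reflection + one shadow ray
native_4k = 3840 * 2160                   # 8,294,400 pixels
dynamic   = 3200 * 1800                   # 5,760,000 pixels (a plausible DRS step)

print(native_4k * rays_per_pixel)         # ~16.6M rays per frame
print(dynamic * rays_per_pixel)           # ~11.5M rays per frame
print(dynamic / native_4k)                # ~0.69: rays fall in step with pixels
```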
To my knowledge every part of the GPU improves with an increase in clock, which should mean that RT scaling should end up pretty similar to the scaling of overall performance?

Replying to: "Does RT scale linearly with clock speed then? Haven't seen that confirmed myself."
It was not 60 fps, we explicitly state that... It is also why the video, downcoded to 30 fps, stutters.

Replying to: "I believe they said it was 60 fps. Maybe not the capture itself, but when they showcased the demo, it ran at 60 fps."
Series X will be better at RT. Better GPU with more shading power, more intersection tests, and more bandwidth = better RT performance.

This is common sense stuff.

They are both using the AMD approach, and unless AMD's hardware RT scales negatively, XSX will have better RT performance.
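For what it's worth, the public specs let you sketch the raw intersection-test gap being claimed here. AMD's RDNA 2 material quotes one ray accelerator per CU, capable of up to 4 ray-box tests (or 1 ray-triangle test) per clock, so the peak rate scales with CUs times clock. This says nothing about memory behaviour, BVH quality, or real-game efficiency; it is a paper peak only.

```python
# Peak ray-box intersection rates from public specs, assuming the RDNA 2
# figure of up to 4 box tests per CU per clock. Paper peaks only.
def peak_box_tests_per_sec(cus, clock_ghz, tests_per_cu_per_clock=4):
    return cus * clock_ghz * 1e9 * tests_per_cu_per_clock

xsx = peak_box_tests_per_sec(52, 1.825)   # ~3.80e11/s, the publicly quoted "380 billion"
ps5 = peak_box_tests_per_sec(36, 2.23)    # ~3.21e11/s at max boost clock
print(xsx / ps5)                          # ~1.18x on paper
```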
It can, if the instructions/workload being run through both are not power (electricity) hungry.

Replying to: "So, just to be clear, the power supply of the PS5 cannot handle a 3.5GHz CPU clock and a 2.23GHz GPU clock concurrently? Ever? That being the case, I'm trying to figure out why he would say that clock speeds decrease specifically in 'worst case games.'"
What do you mean? RT is so horribly bandwidth intensive, and will probably be even more so on these new consoles; it needs bandwidth for real-time work. Being bandwidth limited and going from hundreds of GB/s in memory to the pitifully slow (by comparison) SSD would be a disaster.

Replying to: "Not that I disagree with you (I don't), but there is one other factor that could come into play looking beyond Minecraft: available memory for BVH trees. And in the case of offline-constructed BVH trees, SSD bandwidth."
Unrelated to this, but when can we expect the more in-depth talk about PS5 with the stuff that Cerny shared with you?
What may save memory for RT will be mesh shaders doing some great new culling, and, for bandwidth, per-instance LOD for RT.
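On the BVH memory point, a back-of-the-envelope estimate shows why the acceleration structure is a meaningful chunk of RAM. The node size and triangle count below are assumptions, and real implementations compress nodes, pack multiple triangles per leaf, and instance aggressively, so treat this as an upper-bound illustration rather than a figure for either console.

```python
# Rough BVH memory estimate. A binary BVH over N triangles has roughly N leaf
# nodes and N-1 interior nodes; node size and triangle count are assumptions.
triangles      = 10_000_000            # assumed unique triangles covered by the BVH
bytes_per_node = 32                    # assumed node size before compression
nodes          = 2 * triangles - 1
print(nodes * bytes_per_node / 2**20)  # ~610 MiB before compression/instancing
```

That is exactly the kind of footprint that makes culling and per-instance LOD attractive: fewer unique triangles in the structure shrinks both the memory it occupies and the bandwidth spent walking it.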
How do you comment on this from Cerny:
I have a feeling that RAM is the real Achilles' heel of both of these consoles this gen.
But my question was: does RT improve linearly with the clock speed? I imagine it does; I was just looking for confirmation.
I don't think that's true. Didn't Cerny even mention an example of something that didn't improve? Can't remember the feature now.
I think Cerny said you take a hit on system memory in relation to the GPU. Something like the memory being 33% further away in terms of cycles. Someone else would have to explain exactly what that means, but Cerny seems to think the hit is worth it.
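One way to read that remark is as straightforward arithmetic (the latency value and clocks below are assumptions, and this isn't an attempt to reproduce the exact 33% figure): DRAM latency is roughly fixed in nanoseconds, so the higher the GPU is clocked, the more cycles it spends waiting on the same access.

```python
# If memory latency is fixed in nanoseconds, a higher GPU clock means more
# idle cycles per access. The latency figure is an assumption, not a spec.
latency_ns = 300                      # assumed round-trip latency to GDDR6
for clock_ghz in (1.8, 2.23):
    print(clock_ghz, latency_ns * clock_ghz, "cycles")  # ~540 vs ~669 cycles
```

The absolute wait time doesn't change; it just looks longer to a faster clock, which is the sense in which memory gets "further away" as frequency goes up.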