I wouldn't want that regardless of the virus. Big-stage shows are becoming passé, I feel. A video and a journalist venue for hands-on is good enough for me.
Cerny's use of the word 'potentially' is concerning here. But we'll see in due time.
Dark1x Can you confirm this?
So PS5's variable GPU and CPU speeds mean that running a very light game or app will make both run at much lower clocks, so there's less risk of overheating when running small games, areas, or apps, while XSX will run everything at fixed clocks all the time regardless of the power needed?
Isn't it kinda risky for XSX to run at such heat all the time, even if the cooling system is good enough? This may wear out components much faster.
Quote:
That's my point he's claiming, theorizing. Just say we have a system that can do x and be done with it. Using peaks that aren't possible to be operating simultaneously is just causing confusion. Their plan is to confuse until release, giving hope to fans there is some secret sauce to make up a performance gap.

Not really. It is fairly clear what Cerny is claiming; we just need to see how it actually works with software.
Sorry for being so dense, but doesn't this mean they saved some information from their talks with Cerny just to make a second video after their previous, kinda vague analysis?
Yes. SmartShift is different from variable frequency. SmartShift gives some power from the CPU (without downclocking it) to the GPU when the CPU is not fully used.

Okay, so the PS5 is truly going in hard with the 3D audio. I wonder how the official PS5 headphones will be optimized for it.
This is also interesting:
So unless I'm reading this wrong, the PS5 running the CPU at 3.5GHz isn't even using its full power budget. There's no reason for either not to run at near-max levels.
Context? I don't know her... and in the video, Richard immediately follows that by saying "The fixed profiles that devs are talking about are only used in PS5 dev kits, and in retail games their code will tap into the variable clocks for extra performance."

It's important to have the entire context.
"has not" as in Sony hasn't said anything about it, not that the PS5 doesn't have it.Also, PS5 will not have VRS... or i misinterpreted the article?
Quote:
And at the nuts and bolts level, there are still some lingering question marks. Both Sony and AMD have confirmed that PlayStation 5 uses a custom RDNA 2-based graphics core, but the recent DirectX 12 Ultimate reveal saw AMD confirm features that Sony has not, including variable rate shading.
"GPUs process hundreds or even thousands of wavefronts; the Tempest engine supports two," explains Mark Cerny. "One wavefront is for the 3D audio and other system functionality, and one is for the game. Bandwidth-wise, the Tempest engine can use over 20GB/s, but we have to be a little careful because we don't want the audio to take a notch out of the graphics processing"
I was theorizing earlier today that the Tempest engine was actually one of the 4 idle CUs on the APU, and that it was utilizing the second 20GB/s Onion bus (same as on PS4) to bypass the cache and avoid sharing bandwidth with the other CUs. This seems to confirm that is the case.

The Onion bus (as it is called on PS4) is used for asynchronous compute needs, hence the "take a notch out of the graphics processing" mention by Cerny, I'm sure.
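For scale, here's a back-of-envelope sketch in Python. The 448GB/s total is the published PS5 memory bandwidth figure and the 20GB/s ceiling is from the Cerny quote above; the separate-bus idea is just the theory from this post, not something Sony has confirmed:

```python
# Back-of-envelope: how big a "notch" could Tempest audio take out of
# the shared GDDR6 bandwidth? 448 GB/s is the published PS5 figure;
# the ~20 GB/s Tempest number is from the Cerny quote above.
total_bandwidth_gbs = 448.0   # PS5 unified GDDR6 memory bandwidth
tempest_budget_gbs = 20.0     # upper bound Cerny mentions for Tempest

share = tempest_budget_gbs / total_bandwidth_gbs
print(f"Tempest worst case: {share:.1%} of total memory bandwidth")
# -> roughly 4.5%, which is why routing audio over a separate bus
#    (as theorized above) would be attractive.
```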
From the other thread:
Quote:
Interesting nothing around the SSD and the I/O stack in this video, considering it was like half of the Cerny presentation.

There are details in the article. I found it odd they didn't talk about it in the video, though.
I don't see how this would wear out components any faster. This really isn't an issue with modern processors. It's not something that could or should be used as a point of argument in any discussion, I'd say.
Of current machines, only Switch is a concern for me as it won't boot without some charge in the battery which means long-term usage will always require a functional battery.
Your current consoles don't have a change of frequency; they run at a fixed point and they seem to have managed fine. It's more the norm, and if, as we expect, the new gen is similar to now, it will only be about 7 years.
Quote:
I don't see how this would wear out components any faster. This really isn't an issue with modern processors.

Agreed. My 9900k can run in balanced (downclock at idle) or full-power mode. Even with the processor clocked at the highest level, the load on it is minimal, meaning it is not actually running full tilt the whole time. Long story short: full clocks are not full load.
I was just asking. I thought Cerny presented variable clocks as an advantage like this. So why did he choose variable clocks if fixed clocks don't represent any problem for the hardware?
Quote:
That's my point he's claiming, theorizing. Just say we have a system that can do x and be done with it. Using peaks that aren't possible to be operating simultaneously is just causing confusion. Their plan is to confuse until release, giving hope to fans there is some secret sauce to make up a performance gap.
No because their plan is actually working.
Quote:
I was just asking. I thought Cerny presented variable clocks as an advantage like this. So why did he choose variable clocks if fixed clocks don't represent any problem for the hardware? I mean, the CPU clock speed is already low enough, so I don't see why you'd have to downclock it to make more room for the GPU clock. At least the CPU clock should have been locked, IMO.

I think they're pushing the GPU clocks rather high at the top end, and this is a good way to get there when needed.
God, how many pages did people spend arguing about this? Even though Cerny said in the video itself that he believes it will run at those clocks most of the time.
Quote:
I think they're pushing the GPU clocks rather high at the top end, and this is a good way to get there when needed.

...but really, running full clocks doesn't mean full load. It's not the same as utilization. So it'll have no impact on longevity.
So, I suppose the question is: as we progress further into the next generation, and games push the GPU and CPU much harder than they do currently, with intensive workloads of advanced graphical effects and ray tracing, and more CPU utilisation for physics and AI etc., will "most workloads" still sustain peak clocks? Or, as developers move away from cross-gen development to exclusively developing for next-gen, are they going to have to choose more often between prioritising CPU or GPU workloads?
Well...
Quote:
Several developers speaking to Digital Foundry have stated that their current PS5 work sees them throttling back the CPU in order to ensure a sustained 2.23GHz clock on the graphics core.
So this is a little bit new. An order of milliseconds to get data through the I/O stack. That's pretty exciting, because it does open up the possibility of on-demand, intra-frame requesting of some kinds of data.
I wonder how much you could get into memory within, say, 10ms...
Is it really trolling, though? It seems everyone except Sony is trying to explain the PS5 and its capabilities.
So am I missing something, or why did the video basically not mention the SSD at all? I mean, if there is one thing that sticks out it's the SSD in the PS5, and it was probably what Cerny spent most of his time on during the presentation.
Variable rate shading, or its functional equivalent, is an important optimisation method for virtual reality uses. It might even be in place already for PSVR. Either way, assuming Sony is still committed to VR, I'd be very surprised if the PS5 doesn't facilitate some version of the same basic principle, regardless of whether or not PS5 and XSX share the exact same implementation in silicon.
VRS is massively overhyped for non-VR purposes anyway. It's a handy optimisation that can deliver decent incremental performance improvements in many situations. Compared to checkerboarding, temporal interpolation, machine learning interpolation etc., it's not that significant. I thought it was weird as hell that when MS revealed the XSX specs they chose to make VRS the second bullet point after "12 TF". I mean, MS gave VRS hype priority over the CPU, SSD, or ray tracing. That's just random as hell IMHO.
Quote:
I could listen to Rich talk for hours. He's so good at this, man.

Yes, DF, the entire crew, are awesome. Everyone join their Patreon!!
This is what Cerny meant when he said they were testing the top 100 PS4 games with the boosted clocks.
No but the number one reason is most likely cost and maybe BC.
Incremental performance improvements?
Sure, if you're talking about VRS Tier 1. But VRS Tier 2 (currently only supported by Nvidia's Turing cards, and confirmed for XSX) is a whole different ball game.

It makes a significant difference to performance in addition to the other features you mentioned, and has less of a visual degradation impact. I've no doubt PS5 will support its own version of this, as it's likely a standard part of the RDNA 2.0 architecture.
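For anyone wondering what Tier 2 actually adds: it lets the GPU vary the shading rate per screen-space tile rather than only per draw call. Here's a toy sketch of the idea in Python; to be clear, this is purely illustrative, not any real graphics API, and the tile size, rate thresholds, and importance heuristic are all invented:

```python
# Toy model of tile-based variable rate shading (VRS Tier 2 style):
# each 8x8 tile of the screen gets a shading rate based on how much
# detail it needs. Rates are (x, y) subsampling factors: (1, 1) means
# full rate, (2, 2) means one shaded pixel per 2x2 block, etc.

def importance(tile_x, tile_y, width_tiles, height_tiles):
    """Hypothetical heuristic: the center of the screen matters most."""
    cx, cy = width_tiles / 2, height_tiles / 2
    dist = ((tile_x - cx) ** 2 + (tile_y - cy) ** 2) ** 0.5
    max_dist = (cx ** 2 + cy ** 2) ** 0.5
    return 1.0 - dist / max_dist  # 1.0 at center, 0.0 at a corner

def shading_rate(imp):
    if imp > 0.66:
        return (1, 1)  # full rate
    if imp > 0.33:
        return (2, 1)  # half rate horizontally
    return (2, 2)      # quarter rate

TILE = 8
width, height = 1920, 1080
wt, ht = width // TILE, height // TILE

full_cost = width * height  # pixel shader invocations at full rate
vrs_cost = 0
for ty in range(ht):
    for tx in range(wt):
        rx, ry = shading_rate(importance(tx, ty, wt, ht))
        vrs_cost += (TILE // rx) * (TILE // ry)

print(f"Shading work vs full rate: {vrs_cost / full_cost:.1%}")
```

The savings obviously depend entirely on how aggressive the rate assignment is, which is exactly why the visual impact varies so much between implementations.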
Seems like a megaboost. What's the source of this image? Just want to read about it.
Napkin maths: assuming you're using the compression and can sustain 9GB/s, and that decompression is instant (not true, even for dedicated hardware), you're looking at 90MB for a 10ms burst. The PS5's SSD is fast, but not that fast.
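In code form, for anyone who wants to fiddle with the numbers (Python; the 9GB/s sustained figure and instant decompression are the same generous assumptions as above):

```python
# Napkin math: how much data fits in a short burst at the PS5's
# compressed SSD throughput? Assumes a perfectly sustained rate and
# instant decompression, neither of which holds in practice.
throughput_gb_per_s = 9.0   # ~typical compressed figure Sony quotes
burst_ms = 10.0             # the "10ms" window from the post above

data_mb = throughput_gb_per_s * 1000 * (burst_ms / 1000.0)
print(f"{burst_ms:.0f}ms burst at {throughput_gb_per_s:.0f}GB/s = {data_mb:.0f}MB")
# -> 90MB: a lot for streaming, but not whole scenes within one frame.
```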
Quote:
No but the number one reason is most likely cost and maybe BC.

And the audio, IMO.
whynotboth.gif
Thanks!

3DMark Updated with New VRS Benchmark for Tier 2 (Only Available on NVIDIA Turing GPUs)
UL Benchmarks has updated 3DMark with a new Variable Rate Shading benchmark that uses Tier 2 VRS (only on NVIDIA Turing GPUs). (wccftech.com)
I think he's saying that the GPU and CPU can both run at the full clock the entire time; it just depends on the power budget. Some instructions are more power intensive than others, like AVX instructions on a CPU, and whatever something like Furmark does on a GPU. So devs will have the option to do really power intensive stuff; they just have to pay it back, so to speak.

Based on this quote, it sounds like the GPU will usually be operating at a lower clock and then respond quickly to go higher when doing intensive processing for a short time; in this case Cerny suggests "a few frames". Based on this, it doesn't sound like it'll normally be operating at that peak clock like some posters were suggesting. But equally, it isn't temperature or CPU clock dependent either.
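Here's a toy model of that "pay it back" idea in Python. This is pure speculation about the actual policy; the wattages and the linear scaling are invented, and only the max clocks and the activity-not-temperature principle come from Cerny's talk:

```python
# Toy model of a fixed-power-budget clock governor, SmartShift style:
# the console has one power budget; CPU and GPU activity (not
# temperature) determines how much each draws, and clocks only come
# down when the combined draw would exceed the budget. All wattages
# here are made up for illustration.

BUDGET_W = 200.0                     # total SoC power budget (invented)
CPU_MAX_GHZ, GPU_MAX_GHZ = 3.5, 2.23 # PS5 peak clocks from Cerny's talk

def power_draw(clock_ghz, max_ghz, max_watts, activity):
    """Power scales with clock and with workload activity (0..1).
    Heavy instruction mixes (e.g. AVX-like) push activity toward 1."""
    return max_watts * (clock_ghz / max_ghz) * activity

def govern(cpu_activity, gpu_activity, cpu_max_w=60.0, gpu_max_w=180.0):
    cpu_clock, gpu_clock = CPU_MAX_GHZ, GPU_MAX_GHZ
    total = (power_draw(cpu_clock, CPU_MAX_GHZ, cpu_max_w, cpu_activity)
             + power_draw(gpu_clock, GPU_MAX_GHZ, gpu_max_w, gpu_activity))
    if total > BUDGET_W:
        # Shave clocks to get back under budget. (Cerny's claim is that
        # a small clock drop buys a large power reduction; this linear
        # model understates that.)
        scale = BUDGET_W / total
        cpu_clock *= scale
        gpu_clock *= scale
    return cpu_clock, gpu_clock

# Light load: both chips hold max clocks despite "full" frequency.
print(govern(cpu_activity=0.3, gpu_activity=0.6))  # (3.5, 2.23)
# Worst case, both hammered with power-hungry work: clocks dip.
print(govern(cpu_activity=1.0, gpu_activity=1.0))
```

That's consistent with both readings above: clocks stay pegged under most workloads, and only the combination of genuinely power-hungry CPU and GPU work forces a trade-off.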