What I'm confused about here is why SmartShift would need to be used at all to move power from one to the other ("the unused portion of the budget goes to the GPU"), if there's enough power for both to run at their peaks simultaneously anyway.
So, I suppose the question is, as we progress further into the next generation and games are pushing the GPU and CPU much harder than currently, with intensive workloads of advanced graphical effects and ray-tracing, and more CPU utilisation for physics and AI etc, will "most workloads" still be sustaining peak clock? Or as developers move away from cross-gen development to exclusively developing for next-gen, are they going to have to make choices more often between prioritising CPU or GPU workloads?

You're missing the 'workload' piece of the puzzle.
Power consumption varies with workload as well as clock.
Cerny expects that in 'most' workloads, 'most of the time', both chips can sustain clocks at or near their peak within the system's power budget.
But sometimes that won't be the case. Some workload types are power-intensive enough that, when they're running on both the CPU and GPU at once, there isn't enough power budget to run both at their max. That's where the power management unit kicks in to manage clocks, and where SmartShift can be used to shift excess power to the GPU.
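As a toy illustration of that budget logic (purely a sketch - the wattage figures and names here are invented, and the real power management unit works from activity monitoring rather than a simple subtraction):

Code:
#include <algorithm>
#include <cstdio>

// Toy model of a fixed shared power budget (all numbers invented).
const double kTotalBudgetWatts   = 200.0;
const double kCpuAllocationWatts = 60.0;

// If the CPU's current workload draws less than its allocation, the
// leftover headroom can be shifted to the GPU (the SmartShift idea).
double gpuBudget(double cpuDemandWatts) {
    double cpuDraw = std::min(cpuDemandWatts, kCpuAllocationWatts);
    return kTotalBudgetWatts - cpuDraw;
}

int main() {
    printf("Light CPU load: GPU gets %.0fW\n", gpuBudget(35.0));  // 165W
    printf("Heavy CPU load: GPU gets %.0fW\n", gpuBudget(60.0));  // 140W
}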
Developers simply need to specify the ID, the start location and end location, and a few milliseconds later the data is delivered. Two command lists are sent to the hardware - one with the list of IDs, the other centring on memory allocation and deallocation - i.e. making sure that the memory is freed up for the new data.
With latency of just a few milliseconds, data can be requested and delivered within the processing time of a single frame, or at worst for the next frame. This is in stark contrast to a hard drive, where the same process can typically take up to 250ms.
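To make that flow concrete, here's a rough sketch of what such a request might look like from a developer's side. This is not the actual PS5 API (which isn't public); every type and function name here is hypothetical, just mirroring the "ID plus start/end location, two command lists" description above:

Code:
#include <cstdint>
#include <vector>

using AssetId = std::uint64_t;  // hypothetical handle for registered asset data

struct ReadRequest {
    AssetId       id;        // which asset to fetch
    std::uint64_t srcStart;  // start location on the SSD
    std::uint64_t srcEnd;    // end location on the SSD
    void*         dest;      // where in RAM the data should land
};

// Two command lists, per the description: one with the IDs to read, the
// other handling allocation/deallocation so memory is free for new data.
struct IoSubmission {
    std::vector<ReadRequest> reads;
    std::vector<void*>       regionsToFree;
};

// Hypothetical fire-and-forget submit; completion would arrive a few
// milliseconds later, within the current frame or the next.
void submitToIoComplex(const IoSubmission& submission) {
    // (stub - on real hardware this would hand the lists to the I/O unit)
    (void)submission;
}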
Just means Sony haven't said anything about that particular piece yet. Which is fair, there's still plenty we don't know about the machine.

Quote:
Also, PS5 will not have VRS... or did I misinterpret the article?
Quote:
And at the nuts and bolts level, there are still some lingering question marks. Both Sony and AMD have confirmed that PlayStation 5 uses a custom RDNA 2-based graphics core, but the recent DirectX 12 Ultimate reveal saw AMD confirm features that Sony has not, including variable rate shading.
That just says it's not confirmed. Not that it's confirmed it does not have it.
The discussion about the SSD added a few details to my understanding. Now I understand why developers would be excited about such an innovation. This pleases me greatly :3
PlayStation 4 Pro was built to deliver higher performance than its base counterpart in order to open the door to 4K display support, but compatibility was key. A 'butterfly' GPU configuration was deployed which essentially doubled up on the graphics core, but clock speeds aside, the CPU had to remain the same - the Zen core was not an option. For PS5, extra logic is added to the RDNA 2 GPU to ensure compatibility with PS4 and PS4 Pro, but how about the CPU side of the equation?
"All of the game logic created for Jaguar CPUs works properly on Zen 2 CPUs, but the timing of execution of instructions can be substantially different," Mark Cerny tells us. "We worked to AMD to customise our particular Zen 2 cores; they have modes in which they can more closely approximate Jaguar timing. We're keeping that in our back pocket, so to speak, as we proceed with the backwards compatibility work."
Quote:
VRS isn't a MS thing.

Yeah I think there is some confusion there that I hope gets cleared up soon.
A lot of folks seem to be pushing a "VRS is a MS thing" narrative but it's not quite that simple, IIRC. It may end up being a situation where the same tech is in PS5 but they call it something else, if all we're worrying about is the "VRS" moniker.
No, he already stated that the GPU will be running mostly at the max clock; here he is giving examples that the CPU will also be running at max frequency if there is no idle state:

Quote:
"So, when I made the statement that the GPU will spend most of its time at or near its top frequency, that is with 'race to idle' taken out of the equation - we were looking at PlayStation 5 games in situations where the whole frame was being used productively. The same is true for the CPU, based on examination of situations where it has high utilisation throughout the frame, we have concluded that the CPU will spend most of its time at its peak frequency."
This may suggest that, in heavy graphical scenes, games will have the CPU running at near-peak frequency more of the time than the GPU.
That exact question came to my mind as well.
Even if the CPU might not need all its power, what benefit would the GPU get from gaining more power when it is already at its peak frequency?? 😳
My wife would be ecstatic. I think she's the only Knack fan, she loved both.
before I watch the video, is this still based on the old Cerny presentation or have they talked with him/Sony recently to get more accurate details?
Quote:
Couple of days prior to the talk going live, Digital Foundry spoke in depth with Cerny on the topics covered. Some of that discussion informed our initial coverage, but we have more information. A lot more.
They talked before the presentation in detail with him.

Quote:
before I watch the video, is this still based on the old Cerny presentation or have they talked with him/Sony recently to get more accurate details?
So, I suppose the question is, as we progress further into the next generation and games are pushing the GPU and CPU much harder than currently, with intensive workloads of advanced graphical effects and ray-tracing, and more CPU utilisation for physics and AI etc, will "most workloads" still be sustaining peak clock? Or as developers move away from cross-gen development to exclusively developing for next-gen, are they going to have to make choices more often between prioritising CPU or GPU workloads?
VRS isn't a MS thing.
VRS Tier 2 has a MS patent but only relating to DXR. I've no doubt Sony will have the same for PS5 as it'll be a standard part of RDNA 2.0.
Quote:
It is such a simple thing and yet caused so much warring in the other thread, and a headache for the DF guys, being called out and labelled as biased and PR for one company.

No, it's not.
Alex said what is elaborated on in the article - that some developers DF spoke to said they were using locked profiles in order to keep the GPU at 2.23GHz all the time. Cerny makes a slightly different point here - that he expects processing to run most of the time 'at or near' peak clocks when the chip is busy.
If devs want a 100% locked GPU clock even when not busy, they have a debug profile that lets them do that. And the only way to guarantee it would be to use one of those profiles. But it's a debug profile rather than a release profile it seems.
So Alex was right, and Cerny was right, but they're talking in slightly different contexts and, I think, with different degrees of precision regarding how tightly the clock stays pinned to peak.
So this is a little bit new. An order of milliseconds to get data through the I/O stack. That's pretty exciting because it does open up the possibility of on-demand, intra-frame requesting of some kinds of data.
I wonder how much you could get into memory within, say, 10ms...
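For a rough answer, using the headline figures from Cerny's presentation (5.5GB/s raw, typically 8-9GB/s with compression - treat the exact numbers as approximate):

Code:
#include <cstdio>

int main() {
    // Headline PS5 SSD figures from the Road to PS5 talk; real throughput
    // depends on compression ratio and access patterns.
    const double rawGBps        = 5.5;   // uncompressed
    const double compressedGBps = 8.5;   // middle of the "typical" 8-9GB/s
    const double windowSec      = 0.010; // the 10ms window in question

    printf("Raw:        ~%.0f MB in 10ms\n", rawGBps * 1000 * windowSec);
    printf("Compressed: ~%.0f MB in 10ms\n", compressedGBps * 1000 * windowSec);
    // Prints roughly 55 MB raw, ~85 MB compressed.
}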
They spoke with Mark Cerny.
Yeah I think there is some confusion there that I hope gets cleared up soon.
Too much "secret sauce" and people theorizing capabilities. Unless Sony is going to magically redesign the system, they may as well showcase what they have. It's amazing how much they have bumbled communication and sony have been masters at such.
"Developers don't need to optimise in any way; if necessary, the frequency will adjust to whatever actions the CPU and GPU are performing,...Right now, it's still difficult to get a grip on boost and the extent to which clocks may vary. "
By next GDC we'll probably get a good idea how variable frequencies are on nextgen games
Except both Sony and MS are using "custom" RDNA2, so no guarantees here. VRS is a big feature when it comes to optimisation and performance in future games; MS, Nvidia and AMD aren't pushing it that hard in the PC space for no reason. You'd think Sony would have mentioned it in their tech reveal considering how important it is.

Quote:
VRS isn't a MS thing.
VRS Tier 2 has a MS patent but only relating to DXR. I've no doubt Sony will have the same for PS5 as it'll be a standard part of RDNA 2.0.
I think the bumbling has a lot to do with marketing The Last of Us 2 and Ghost of Tsushima for later this year. They need those games to look fantastic, but showing PS5 exclusives that blow them out of the water graphically now would possibly take some hype away from them.

Quote:
Too much "secret sauce" and people theorizing capabilities. Unless Sony is going to magically redesign the system, they may as well showcase what they have. It's amazing how much they've bumbled the communication, when Sony are usually masters at it.
I wouldn't want that regardless of the virus. Big-stage shows are becoming passé, I feel. A video and a journalist venue for hands-on is good enough for me.

Quote:
On Richard's point re: a games-focused reveal, I really wonder when that's going to happen considering the situation with the virus. Will be interesting to see what format that's in. It certainly won't be a big press event...
That's perfectly OK though. Fans would buy those games and be super hyped about how PS5 games will look.

Quote:
I think the bumbling has a lot to do with marketing The Last of Us 2 and Ghost of Tsushima for later this year. They need those games to look fantastic, but showing PS5 exclusives that blow them out of the water graphically now would possibly take some hype away from them.
Cool, thank you! Things can get confusing when different companies start using their own labels for stuff that might be openly available for all to use.
e.g. variable refresh rate / G-Sync / FreeSync etc.
Yeah this was interesting and new info.

Quote:
So this is a little bit new. An order of milliseconds to get data through the I/O stack. That's pretty exciting because it does open up the possibility of on-demand, intra-frame requesting of some kinds of data.
I wonder how much you could get into memory within, say, 10ms...
I don't see how this would wear out components any faster. This really isn't an issue with modern processors. It's not something that could or should be used as a point of argument in any discussion, I'd say.

Quote:
Dark1x Can you confirm this?
So PS5's variable GPU and CPU speeds mean that running a very undemanding game or app will have both running at much lower clocks, so there's less risk of overheating when running small games, areas or apps, while XSX will run everything at fixed clocks all the time regardless of the power needed?
Isn't this kinda risky for XSX, to run at such heat all the time even if the cooling system is good enough? This may wear out components much faster.
OK, I was just curious if there was a secondary bus somewhere on the APU so that bandwidth wouldn't need to be shared with other CUs (hence the L1 cache is not needed), so that they could potentially use one of the idle CUs as the Tempest CU. The PS4 APU has a secondary 20GB/s bus so that the L2 and L1 caches can be bypassed (which the PS5 APU likely has as well due to BC concerns).
and in the video, Richard immediately follows that by saying "The fixed profiles that devs are talking about are only used in PS5 dev kits and in retail games their code will tap into the variable clocks for extra performance".

Quote:
It is such a simple thing and yet caused so much warring in the other thread, and a headache for the DF guys, being called out and labelled as biased and PR for one company.