
gofreak

Member
Oct 26, 2017
7,734
His theory about keeping the OS on the SSD and giving devs 15.5 GB of GDDR6 sounds very interesting. I am not sure it will work like that in practice, though. Probably more memory will be allocated for the OS than 0.5 GB.

Yeah, I think so too.

On a side note, I think for standby, or for the 'optional' low-power standby mode Jim Ryan talked about last year, the OS may work more or less entirely against the SSD and its SRAM. One of the SSD patents talked about that, for a reduced-power standby mode. You don't have to keep feeding power to the DRAM in that case while in standby.
 

tzare

Banned
Oct 27, 2017
4,145
Catalunya
Just as an aside, 5 GB/s is not enough for an OS. You'd be getting hitches every time an OS function swapped into main memory.

DDR3's slowest speed is about 1 GB/s faster.
But if the full OS running in the background isn't more than 3-4 GB, wouldn't that be enough? At least today's consoles reserve that amount of RAM for the OS. And maybe the full OS doesn't need to be loaded: just a small part would stay in RAM (basic console functions and security, 500 MB or 1 GB), freeing most of it for gaming. For non-essential parts of the OS, like trophy notifications or the UI when you press the PS button, it doesn't really matter if they take a second to load.
Just speculation, I'm no tech guy at all.
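
To put rough numbers on that hitching worry, here's a minimal back-of-the-envelope sketch. It assumes PS5's quoted 5.5 GB/s raw read speed and a 60 fps frame budget; the chunk sizes are illustrative assumptions, not from any spec sheet:

```python
# Back-of-the-envelope: how long does paging OS data in from the SSD take,
# compared to a 60 fps frame budget? Chunk sizes are illustrative assumptions.

SSD_BANDWIDTH_GBPS = 5.5      # PS5's quoted raw read speed
FRAME_BUDGET_MS = 1000 / 60   # ~16.7 ms per frame at 60 fps

for chunk_mb in (10, 100, 500):
    ms = chunk_mb / 1024 / SSD_BANDWIDTH_GBPS * 1000
    print(f"{chunk_mb:3d} MB page-in: {ms:5.1f} ms (~{ms / FRAME_BUDGET_MS:.1f} frames)")

# 10 MB  -> ~1.8 ms (invisible), 100 MB -> ~17.8 ms (about one whole frame),
# 500 MB -> ~88.8 ms (a several-frame hitch if done synchronously)
```

Small asynchronous page-ins fit comfortably inside a frame; a large synchronous swap is exactly the kind of hitch being described above.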
 

zeuanimals

Member
Nov 23, 2017
1,453
So what kind of things will the SSD let devs do, gameplay-wise? Faster swinging for Spidey for sure, and a lack of tunnels and walking scenes, but what else? Will a Flash game finally be possible?
 

Deusmico

Banned
Oct 27, 2017
1,254
If the PS5 can use the SSD as system RAM for the OS, it can have more RAM available, which is also faster than the XSX's slower pool.
 

Deusmico

Banned
Oct 27, 2017
1,254
So what kind of things will the SSD let devs do, gameplay-wise? Faster swinging for Spidey for sure, and a lack of tunnels and walking scenes, but what else? Will a Flash game finally be possible?


Faster background streaming of assets, which was a problem for a lot of games and caused performance issues.
 

tzare

Banned
Oct 27, 2017
4,145
Catalunya
What's the background of the Digital Foundry team in comparison? I assumed they were the same.
I sometimes ask that myself. They are journos, so they get some info not available to the average user, for sure, but they started counting pixels and vegetation density early in the PS360 era and became popular; I'm not sure they have more technical knowledge than people who already post in forums like this.
 
Last edited:
Apr 4, 2018
4,508
Vancouver, BC
Interesting take from NXGamer, but where is he getting this info that games will get 15.5 GB of VRAM and that the OS will be cached to the SSD? I didn't get that takeaway from Cerny's speech. He would have gloated about it if they could do that. It sounds like more of a pipe dream.

Also, his takeaway that the SSD speed will somehow make up for raw GPU power is something else. Under no circumstances will a faster SSD let the system render more things on screen. What it can do, though, is allow the system to stream in assets faster.
 

NXGamer

Member
Oct 27, 2017
372
Great video, it was very informative and I liked the theory about the OS being cached on the SSD. Back when ReRAM was still being discussed I had pondered whether they could just run the OS from ReRAM, freeing up the actual RAM for the games. It was pointed out to me that ReRAM would still be too slow, but I never thought about the possibility that a tiny bit of the OS could run in the RAM and the rest just gets dumped into the SSD.

Hmm, the OS would still have to manage network connectivity, monitor your friends, what they're playing, chat, background music... how much actual RAM would still be needed to manage that? I assume the portion being sent to the SSD would be "suspended" until it was needed so that portion couldn't run any actual functions in that state, or could it?

Anyways, I look forward to more knowledgeable people discussing this :)
Yes, exactly this. The point I am making is that the OS remains resident with the kernel thread, system calls, base UI, notifications etc., but all the heavy static data can be offloaded to an SSD cache. This could take a 3 GB OS (as on PS4/X1) down to 1-1.5 GB resident in RAM; the core I/O design and latency system means this and many other options are a reality now. I will be very interested to see the OS details, how they have designed it and are going to use it; I suspect something fancy here.

Like I typed elsewhere, on an informed hunch that my colleague John also alluded to, it does not actually work like that. Devs have to choose.

Not sure what you are saying here, but do you mean that developers need to choose the speed/frequency of the CPU/GPU for their game at runtime, or dynamically?

Just for clarity, I am almost 100% sure that is not accurate, or not what you are alluding to.

Interesting take from NXGamer, but where is he getting this info that games will get 15.5 GB of VRAM and that the OS will be cached to the SSD? I didn't get that takeaway from Cerny's speech. He would have gloated about it if they could do that. It sounds like more of a pipe dream.

Also, his takeaway that the SSD speed will somehow make up for raw GPU power is something else. Under no circumstances will a faster SSD let the system render more things on screen. What it can do, though, is allow the system to stream in assets faster.
So, I never said that (games will get 15.5 GB); I stated an example (best case/theory). What I am alluding to here is that, with the speed and construction of the system, this and other options are possible: the core kernel, tasks and UI remain resident while static data is cached out. So rather than 3 GB of OS, it only uses 1-1.5 GB of OS while the game is running, then swaps back when Home is pressed.

Also, please do not put words in my mouth; I never said that a fast SSD replaces the GPU. I said that the SSD and I/O construction, alongside the other benefits, mean that when a game is dense with objects and streaming information in, the stutters, cache misses, hangs etc. will be reduced. It will likely always run at a slightly lower resolution (as I clearly state in the video) on the PS5, but to think that all slowdowns, stutters and hangs are GPU bound is naive at best, or intentionally distorting the narrative at worst.
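
To make the arithmetic in that idea concrete, here's a minimal sketch; the resident-OS figures are NXGamer's best-case theory, not confirmed specs:

```python
# Illustrative memory budget for the "OS cached to SSD" idea.
# All figures are assumptions for the sake of arithmetic, not confirmed specs.

TOTAL_RAM_GB = 16.0  # PS5's GDDR6 pool

def game_budget(os_resident_gb: float) -> float:
    """RAM left for games once the OS's resident portion is subtracted."""
    return TOTAL_RAM_GB - os_resident_gb

for label, os_gb in (("PS4-style 3 GB OS reservation", 3.0),
                     ("resident kernel/UI only, ~1.5 GB", 1.5)):
    print(f"{label}: {game_budget(os_gb):.1f} GB for games")

# PS4-style 3 GB OS reservation: 13.0 GB for games
# resident kernel/UI only, ~1.5 GB: 14.5 GB for games
```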
 
Last edited:

Vector

Member
Feb 28, 2018
6,631
Can anyone more knowledgeable expand on the memory differences between the two? As far as I'm aware, PS5's SSD allows it to use north of 15 GB of the system memory by saving a snapshot of the OS onto the SSD, while the XSX only has around 13 GB of system memory for games. Now I imagine this obviously won't impact image quality, where the XSX should be comfortably ahead, but this combined with blazing-fast asset streaming should allow first-party games to do some pretty amazing things on the PS5.
 

RestEerie

Banned
Aug 20, 2018
13,618
He spent all of his XSX deep dive downplaying teraflops. No discussion of RT, VRS, SSD, mesh shaders. Personally I don't find that objective.



As an engineer working in enterprise tech, seeing people that cannot differentiate their bits from bytes and their floats from integers focus on the teraflops number as the be-all-end-all benchmark of system performance is laughable and nostalgic... reminds me of the playground bickering I had in the 90s during the 'bit wars'. Good times.
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
Probably way after people realize that storage does not compute.

Storage might not compute, but it was by far one of the biggest bottlenecks of this generation, alongside the Jaguar CPUs. I think it's time people stopped downplaying it and started accepting that this crazy leap in speed will amount to more than just a single-second difference in load times compared to the XSX.
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
Can anyone more knowledgeable expand on the memory differences between the two? As far as I'm aware, PS5's SSD allows it to use north of 15 GB of the system memory by saving a snapshot of the OS onto the SSD, while the XSX only has around 13 GB of system memory for games. Now I imagine this obviously won't impact image quality, where the XSX should be comfortably ahead, but this combined with blazing-fast asset streaming should allow first-party games to do some pretty amazing things on the PS5.

Actually, resolution is not the only thing that impacts image quality. Stuff like higher quality assets and especially having more of them (also for distant objects) should drastically increase the perceived image quality. The SSD should help with streaming in those assets.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
Yes, exactly this. The point I am making is that the OS remains resident with the kernel thread, system calls, base UI, notifications etc., but all the heavy static data can be offloaded to an SSD cache. This could take a 3 GB OS (as on PS4/X1) down to 1-1.5 GB resident in RAM; the core I/O design and latency system means this and many other options are a reality now. I will be very interested to see the OS details, how they



Not sure what you are saying here, but do you mean that developers need to choose the speed/frequency of the CPU/GPU for their game at runtime, or dynamically?

Just for clarity, I am almost 100% sure that is not accurate, or not what you are alluding to.
Devs will choose whether they want full power to the GPU or full power to the CPU, where one or the other underclocks below the listed spec. So it's on a game-to-game basis. I imagine most cross-gen games will choose to prefer the higher-clocked GPU mode, as they will be GPU bound even if the Zen cores are underclocked. Zen so thoroughly runs rings around Jaguar that most cross-gen games are not going to worry about CPU time, especially 30 fps games.

That is how it works.
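
A sketch of what that game-to-game choice could look like from the developer side. This is purely hypothetical: the profile names, clock figures, and any API shape are my assumptions, not Sony's SDK:

```python
# Hypothetical sketch of a per-title power-profile choice.
# Profile names and clock numbers are illustrative assumptions only.

from dataclasses import dataclass

@dataclass(frozen=True)
class PowerProfile:
    name: str
    cpu_ghz: float  # sustained CPU clock under this profile
    gpu_ghz: float  # sustained GPU clock under this profile

PROFILES = [
    PowerProfile("prefer_gpu", cpu_ghz=3.2, gpu_ghz=2.23),  # GPU holds its cap
    PowerProfile("prefer_cpu", cpu_ghz=3.5, gpu_ghz=2.10),  # CPU holds its cap
]

def pick_profile(gpu_bound: bool) -> PowerProfile:
    """A cross-gen, GPU-bound title would likely prefer the GPU profile."""
    return PROFILES[0] if gpu_bound else PROFILES[1]

print(pick_profile(gpu_bound=True))
```

Whatever the real mechanism, the trade is the same: one domain holds its cap while the other cedes clock.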
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Storage might not compute, but it was by far one of the biggest bottlenecks of this generation, alongside the Jaguar CPUs. I think it's time people stopped downplaying it and started accepting that this crazy leap in speed will amount to more than just a single-second difference in load times compared to the XSX.
There has to be a middle ground, because on the other side people are uptalking the SSD (while the competitor actually also has an SSD) like crazy. Add to that, bandwidth is only one metric when it comes to storage; the other is latency, which is the main benefit an SSD has over an HDD in general, not necessarily the bandwidth. Even Cerny said as much. So please stop this black-and-white thinking. One has a compute advantage and a memory bandwidth advantage; the other has a storage bandwidth advantage.
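
For scale, here are typical textbook random-access latencies; these are ballparks for the storage classes in general, not measurements of either console:

```python
# Typical order-of-magnitude random-access latencies (textbook ballparks,
# not console-specific measurements).

LATENCY = {
    "DRAM":     100e-9,  # ~100 ns
    "NVMe SSD": 100e-6,  # ~100 us
    "HDD seek":  10e-3,  # ~10 ms
}

for medium, seconds in LATENCY.items():
    print(f"{medium:9s}: {seconds * 1e6:>10.1f} us "
          f"({seconds / LATENCY['DRAM']:>8.0f}x DRAM)")
```

An SSD closes most of the gap to the HDD, but it is still orders of magnitude away from DRAM, which is the point about latency being its own metric.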
 

NXGamer

Member
Oct 27, 2017
372
Devs will choose whether they want full power to the GPU or full power to the CPU, where one or the other underclocks below the listed spec. So it's on a game-to-game basis. I imagine most cross-gen games will choose to prefer the higher-clocked GPU mode, as they will be GPU bound even if the Zen cores are underclocked. Zen so thoroughly runs rings around Jaguar that most cross-gen games are not going to worry about CPU time, especially 30 fps games.

That is how it works.
You mean they can prioritise the frequency of the GPU or CPU IF it becomes constrained beyond the ceiling of the capped rates? This is exactly what happens now; having a dual-mode option for developers to choose from (à la Switch) is not new, but I will be interested in seeing this option and just what the deltas are. Thanks for sharing this, which I assume came from a dev source then?
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
You mean they can prioritise the frequency of the GPU or CPU IF it becomes constrained beyond the ceiling of the capped rates? This is exactly what happens now; having a dual-mode option for developers to choose from (à la Switch) is not new, but I will be interested in seeing this option and just what the deltas are. Thanks for sharing this, which I assume came from a dev source then?
Basically you target and say 'I want full GPU', and the CPU underclocks so the power budget keeps the GPU clock high. The power that would have been CPU-reserved goes over to the GPU to keep its clock more stable, and since the CPU is now lower clocked, more intense utilisation or instructions will not tip the power balance. Well, that is for a game that is not absolutely thrashing both.
Indeed, this info comes from people who work on the thing.

Basically, if the GPU is at 10.2 TF, the CPU is not at 3.5 GHz.

Cerny said all this on stage, basically, just not in the most direct way. The only reason to mention SmartShift at all is if this is what happens, just like it does with SmartShift.
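
For reference, the teraflop number is just the clock restated: a quick worked example using PS5's public shader count (the 2.10 GHz figure below is an illustrative assumption, not a confirmed floor):

```python
# GPU TFLOPS follows directly from clock speed:
#   TFLOPS = shader ALUs x 2 ops/cycle (FMA) x clock (GHz) / 1000
# PS5: 36 CUs x 64 ALUs = 2304 shaders.

SHADERS = 36 * 64

def tflops(clock_ghz: float) -> float:
    return SHADERS * 2 * clock_ghz / 1000

print(f"{tflops(2.23):.2f} TF at 2.23 GHz")  # ~10.28 TF, the quoted peak
print(f"{tflops(2.10):.2f} TF at 2.10 GHz")  # ~9.68 TF if the GPU cedes power
```

So "the GPU is at 10.2 TF" and "the GPU is at its 2.23 GHz cap" are the same statement, which is why both can't hold while the CPU is also at 3.5 GHz under a shared power budget.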
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
There has to be a middle ground because on the other side people are uptalking the SSD (while the competitor actually also has an SSD) like crazy. Add to that, bandwidth is only one metric when it comes to storage, the other one is latency which is the main benefit an SSD has to HDD in general, not necessarily the bandwidth, even Cerny said as much. So please stop this black and white thinking. One has a compute advantage and memory bandwidth advantage, the other has a storage bandwidth advantage.

One side is uptalking the SSD; the other is downplaying the 100% difference in SSD speed while uptalking a 15% difference in compute power.

A lot of people should just take a step back and stop posting for a while, I think.
 

tzare

Banned
Oct 27, 2017
4,145
Catalunya
There has to be a middle ground, because on the other side people are uptalking the SSD (while the competitor actually also has an SSD) like crazy. Add to that, bandwidth is only one metric when it comes to storage; the other is latency, which is the main benefit an SSD has over an HDD in general, not necessarily the bandwidth. Even Cerny said as much. So please stop this black-and-white thinking. One has a compute advantage and a memory bandwidth advantage; the other has a storage bandwidth advantage.
But the advantage is different in each area. The compute advantage is around 15-16%, and that's without considering potential advantages of higher clocks that may slightly offset that percentage in certain scenarios. The bandwidth advantage is there, but since there are fewer TF, maybe it isn't needed as much, given the GPU is slightly less capable; and besides, not all of the memory is faster, just 62% of it (10 GB of the 16), the rest is slower.

On the other side, the SSD advantage is much bigger in percentage terms: it is not 15-20% faster, it is far more than that. Whether that matters, time will tell. It seems obvious that the XSX is more powerful, by around 15-20% at most, but if the SSD bet turns into tangible results, the advantage there would be appreciable.

We are also missing the most important thing, price, which will put everything into context here. If they're the same price, I think the XSX on paper seems the better deal (at least until Sony proves its approach makes sense).
If one is cheaper than the other, well, that speaks for itself.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
One side is uptalking the SSD; the other is downplaying the 100% difference in SSD speed while uptalking a 15% difference in compute power.

A lot of people should just take a step back and stop posting for a while, I think.
They won't, and they shouldn't; it's just that this is not a black-and-white discussion, no matter what the percentages say, especially when they are not even comparing the same thing.
 

nelsonroyale

Member
Oct 28, 2017
12,124
Basically you target and say 'I want full GPU', and the CPU underclocks so the power budget keeps the GPU clock high. The power that would have been CPU-reserved goes over to the GPU to keep its clock more stable, and since the CPU is now lower clocked, more intense utilisation or instructions will not tip the power balance. Well, that is for a game that is not absolutely thrashing both.
Indeed, this info comes from people who work on the thing.

Basically, if the GPU is at 10.2 TF, the CPU is not at 3.5 GHz.

That is exactly what Cerny alluded to. From what I understood, though, the drops in frequency when the CPU is maxed shouldn't be substantial. He talked about a 2-3% drop in frequency for a 10% drop in power, didn't he? Remains to be seen how that holds up.
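
That ratio is plausible from first principles: dynamic power scales roughly with frequency times voltage squared, and voltage tends to scale with frequency, so power goes roughly with the cube of the clock. A quick sanity check using that textbook approximation (not Sony's actual curve):

```python
# Rough sanity check of "a few % of clock buys ~10% of power".
# Dynamic power P ~ f * V^2, and V scales roughly with f, so P ~ f^3.
# The cubic model is a textbook approximation, not Sony's published curve.

for clock_drop_pct in (1, 2, 3, 4):
    f = 1 - clock_drop_pct / 100
    power_drop_pct = (1 - f ** 3) * 100
    print(f"{clock_drop_pct}% lower clock -> ~{power_drop_pct:.1f}% lower power")

# 1% lower clock -> ~3.0% lower power
# 2% lower clock -> ~5.9% lower power
# 3% lower clock -> ~8.7% lower power
# 4% lower clock -> ~11.5% lower power
```

So "a couple of percent of frequency for 10% of power" is in the right ballpark for a cubic-ish curve.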
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
That is exactly what Cerny alluded to. From what I understood though, the drops in say frequency when CPU is maxed shouldn't be substantial. He talked about 2-3% drop in frequency for 10% drop in power to the GPU didn't he? Remains to be seen how that holds up.
We are going to be asking devs about the system-reserved amount of RAM and what exactly the CPU and GPU clocks are in each mode.
I do not think it will have a decidedly different RAM reservation for the OS. SSDs do not have RAM-like random access latency; we use RAM for a reason.
 

Desodeset

Member
May 31, 2019
2,325
Sofia, Bulgaria
Devs will choose whether they want full power to the GPU or full power to the CPU, where one or the other underclocks below the listed spec. So it's on a game-to-game basis. I imagine most cross-gen games will choose to prefer the higher-clocked GPU mode, as they will be GPU bound even if the Zen cores are underclocked. Zen so thoroughly runs rings around Jaguar that most cross-gen games are not going to worry about CPU time, especially 30 fps games.

That is how it works.

Can you make a guess about the theoretical underclocks if devs prioritise the CPU or GPU?
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
They won't and they should not, just that this is not a black and white discussion, no matter what the percentages say, especially when they are not even comparing the same thing.

I agree with the latter part of your post. Not sure I agree with the first part, though. There are a lot of people making strongly opinionated yet uninformed posts because they want their box of preference to come out on top. I think there would be a better discussion to be had, and we could discuss the advantages of both systems in a more constructive manner, without those who downplay everything about the box they aren't interested in.

Having said that, I know full well that those people aren't going anywhere, nor will they stop posting.
 

RedSeim

Banned
Sep 24, 2019
65
Yeah, that's a weird claim considering the RAM runs 50-100 times faster than even this magic SSD. You could fill and empty it in a couple of seconds, but that SSD is no match for actual RAM functions.
So that's exactly why this SSD could be used as a kind of RAM, just for some functionalities, yes; but in the end, you could. And that's the point. Until now this was literally impossible: skipping the step of loading some kinds of data into RAM and working with them directly from the SSD. What this means in terms of performance remains to be seen, of course. But it might be something. If we pass by the forums and pretend to be minimally expert in hardware matters (which generally we are not, even those who try their hardest to pretend), we should at least consider that this is something, and not dismiss it as bullshit. THAT would be ignorant, in fact.
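
For a PC-side analogy of working against storage as if it were memory: memory-mapping a file lets the OS demand-page data in from disk only when it's actually touched. This is a generic OS facility used to illustrate the shape of the idea, not PS5's actual mechanism:

```python
# PC-side analogy: memory-map a file so the OS demand-pages it from storage
# only when touched, instead of copying it all into RAM up front.
# A generic OS facility, not PS5's actual mechanism.

import mmap
import os

path = "assets.bin"

# Create a dummy 64 MB file to stand in for cold OS/asset data.
if not os.path.exists(path):
    with open(path, "wb") as f:
        f.truncate(64 * 1024 * 1024)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as view:
        # No read happens until a page is actually touched:
        first_byte = view[0]            # faults in one page from storage
        deep_byte = view[32 * 1024 * 1024]  # faults in a page 32 MB deep
        print(first_byte, deep_byte)
```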
 

NXGamer

Member
Oct 27, 2017
372
Basically you target and say 'I want full GPU', and the CPU underclocks so the power budget keeps the GPU clock high. The power that would have been CPU-reserved goes over to the GPU to keep its clock more stable, and since the CPU is now lower clocked, more intense utilisation or instructions will not tip the power balance. Well, that is for a game that is not absolutely thrashing both.
Indeed, this info comes from people who work on the thing.

Basically, if the GPU is at 10.2 TF, the CPU is not at 3.5 GHz.

Cerny said all this on stage, basically, just not in the most direct way. The only reason to mention SmartShift at all is if this is what happens, just like it does with SmartShift.
Yes, so you allocate a power/frequency budget target; that is as expected and is standard, and the fluctuation levels of hardware are constant anyway. I will be interested to see the ratio they are working with, as you are making it sound like they chose to 'effectively' downclock the CPU below 3.5 GHz (let's say 3 GHz, for example) so that at this level the GPU is always at full capacity. This kind of locked state is quite odd, as it will leave potential resources on the table; much as dynamic resolution scaling makes the best use of a GPU's constantly fluctuating needs, leaving these to self-regulate makes the most sense. For Sony to lock this down at runtime for the devs, as you are saying, is a very odd choice indeed; even if it will rarely cause an issue, it wastes performance OR means the thermals are still an issue.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
I agree with the latter part of your post. Not sure I agree with the first part, though. There are a lot of people making strongly opinionated yet uninformed posts because they want their box of preference to come out on top. I think there would be a better discussion to be had, and we could discuss the advantages of both systems in a more constructive manner, without those who downplay everything about the box they aren't interested in.

Having said that, I know full well that those people aren't going anywhere, nor will they stop posting.
There really is some interesting tech inside these consoles, and just yesterday I asked the DF people if they could shed some light on the differences between the lossless compression ASICs and the respective algorithms those ASICs implement, Kraken vs. BCPack and Zlib, but it seems no one wants to have that discussion, as it seems to be too technical and unpopular. It seems more interesting to go down the 'simple math path' and construct edge cases where one console could outperform the other, all done by armchair devs.
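
For anyone who wants a taste of that discussion, zlib at least can be poked at on any machine. Kraken and BCPack are proprietary, so this sketch only illustrates the general ratio-versus-throughput question, on synthetic data:

```python
# Minimal look at lossless compression trade-offs using zlib. Kraken and
# BCPack are proprietary, so this only illustrates the general idea, on
# synthetic, highly repetitive data.

import time
import zlib

# Synthetic "asset": repetitive enough to compress, like structured game data.
data = b"vertex:1.0,2.0,3.0;normal:0.0,1.0,0.0;" * 100_000

packed = zlib.compress(data, level=6)
print(f"ratio: {len(data) / len(packed):.1f}:1 "
      f"({len(data)} -> {len(packed)} bytes)")

start = time.perf_counter()
zlib.decompress(packed)
elapsed = time.perf_counter() - start
print(f"software inflate: {len(data) / elapsed / 1e9:.2f} GB/s on one core")
```

The whole point of the consoles' ASICs is that the inflate step above costs the CPU nothing, even at multi-GB/s rates.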
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
We are going to be asking devs about the system-reserved amount of RAM and what exactly the CPU and GPU clocks are in each mode.
I do not think it will have a decidedly different RAM reservation for the OS. SSDs do not have RAM-like random access latency; we use RAM for a reason.

Since 2.23 GHz is the cap for the GPU and 3.5 GHz is the cap for the CPU, and with Cerny having said that it only takes a drop of a few percent in clock speed to cut the power draw by 10%... I can't imagine that either the GPU or the CPU would clock down by very significant amounts, right?

I'm very interested in knowing what the floor is for either the CPU or GPU for the other to run at full speed.
 

RedSeim

Banned
Sep 24, 2019
65
Probably way after people realize that storage does not compute.
Except when that storage is made to be used as a literal performance booster.
Storage does not compute on PCs, for instance, because there it is not designed to help increase the general performance of the system. On PCs it is used as just that, storage, and therefore the only thing storage can do for you is reduce loading times.

This is not that case here. Maybe you will end up understanding it. That doesn't seem likely, though.
 

Deusmico

Banned
Oct 27, 2017
1,254
Since 2.23 GHz is the cap for the GPU and 3.5 GHz is the cap for the CPU, and with Cerny having said that it only takes a drop of a few percent in clock speed to cut the power draw by 10%... I can't imagine that either the GPU or the CPU would clock down by very significant amounts, right?

I'm very interested in knowing what the floor is for either the CPU or GPU for the other to run at full speed.

For games that will require full power from both GPU and CPU, I guess they would drop the 2% or more to be at around 10 TF and keep the CPU as close to 3.5 GHz as possible.
 

Desodeset

Member
May 31, 2019
2,325
Sofia, Bulgaria
It is also a very interesting approach, because early PS5 units will not be able to keep a 3.5 GHz CPU and a 2.23 GHz GPU, but later revisions will probably have more stable chips and improved cooling. Could we witness another Xbox One/Xbox One S situation?
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
There really is some interesting tech inside these consoles, and just yesterday I asked the DF people if they could shed some light on the differences between the lossless compression ASICs and the respective algorithms those ASICs implement, Kraken vs. BCPack and Zlib, but it seems no one wants to have that discussion, as it seems to be too technical and unpopular. It seems more interesting to go down the 'simple math path' and construct edge cases where one console could outperform the other, all done by armchair devs.

Yeah, I personally would love to see stuff like that discussed by those with actual experience in the matter, even if that means I might not completely understand whatever is being discussed. It's always fun to learn. It's hardly any fun to see people battle over numbers they probably don't even understand.
 

Decarb

Member
Oct 27, 2017
8,633
So that's exactly why this SSD could be used as a kind of RAM, just for some functionalities, yes; but in the end, you could. And that's the point. Until now this was literally impossible: skipping the step of loading some kinds of data into RAM and working with them directly from the SSD. What this means in terms of performance remains to be seen, of course. But it might be something. If we pass by the forums and pretend to be minimally expert in hardware matters (which generally we are not, even those who try their hardest to pretend), we should at least consider that this is something, and not dismiss it as bullshit. THAT would be ignorant, in fact.
Maybe some basic functions can be offloaded and called back when the user requests them. For example, I'm preeeetty sure that when you suspend/resume multiple games on the XSX, their entire memory state is offloaded to the SSD wholesale and loaded back when you switch to them. The SSD tech in the XSX is no slouch either.
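
Rough numbers for that suspend/resume model, using both consoles' quoted raw read speeds; the snapshot size and the snapshot-the-whole-pool model are simplifying assumptions:

```python
# Back-of-the-envelope suspend/resume: dump a game's RAM image to SSD,
# read it back later. Snapshotting the whole game pool is an assumption,
# as is the 13 GB size; only the raw read speeds are quoted figures.

QUOTED_RAW_READ_GBPS = {"XSX": 2.4, "PS5": 5.5}
GAME_RAM_SNAPSHOT_GB = 13.0

for console, gbps in QUOTED_RAW_READ_GBPS.items():
    seconds = GAME_RAM_SNAPSHOT_GB / gbps
    print(f"{console}: ~{seconds:.1f} s to page a {GAME_RAM_SNAPSHOT_GB:g} GB "
          f"snapshot back in at {gbps} GB/s raw")

# XSX: ~5.4 s, PS5: ~2.4 s -- either way, fast enough to make whole-game
# swapping a UI feature rather than a reboot.
```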
 

Deusmico

Banned
Oct 27, 2017
1,254
It is also a very interesting approach, because early PS5 units will not be able to keep a 3.5 GHz CPU and a 2.23 GHz GPU, but later revisions will probably have more stable chips and improved cooling. Could we witness another Xbox One/Xbox One S situation?

False, chip quality doesn't have an effect on the clocks (though it will for Sony's production yields).
 

Brohan

The Fallen
Oct 26, 2017
2,544
Netherlands
For games that will require full power from both GPU and CPU, I guess they would drop the 2% or more to be at around 10 TF and keep the CPU as close to 3.5 GHz as possible.

It's going to be very interesting to see what devs choose to go for, and it will most likely depend heavily on the type of game they are making.

Making a game that is both CPU and GPU intensive? - - > CPU at 3.4 and GPU at 2.15

GPU intensive game? - - - > CPU at 3.3 and GPU at 2.23

CPU intensive game? - - - > CPU at 3.5 and GPU at 2.10


I imagine it will be something like this. Of course, I pulled the numbers out of my ass, so it's going to be interesting to see what the actual numbers are.
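
Running those admittedly made-up clocks through the standard TF formula gives a feel for how small the spread actually is:

```python
# What the hypothetical clocks above would mean in TF terms (the clocks are
# explicitly made up; this just runs the arithmetic on PS5's 2304 shaders).

SHADERS = 36 * 64

def tflops(clock_ghz: float) -> float:
    return SHADERS * 2 * clock_ghz / 1000

for scenario, cpu, gpu in [("CPU+GPU intensive", 3.4, 2.15),
                           ("GPU intensive",     3.3, 2.23),
                           ("CPU intensive",     3.5, 2.10)]:
    print(f"{scenario}: CPU {cpu} GHz, GPU {gpu} GHz -> {tflops(gpu):.2f} TF")

# Even the worst case here (2.10 GHz, ~9.68 TF) is only ~6% below the
# 10.28 TF peak.
```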
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Except when that storage is made to be used as a literal performance booster.
Storage does not compute on PCs, for instance, because there it is not designed to help increase the general performance of the system. On PCs it is used as just that, storage, and therefore the only thing storage can do for you is reduce loading times.
That's why storage was a major point for MS in planning their new console, while not forgetting the actual compute units. They built the Velocity Architecture, made an ASIC for Zlib and BCPack realtime lossless compression, used an SSD that guarantees sustained bandwidth rates, and developed DirectStorage.
On the other hand, storage also helps the compute units on PC, but it never computes in any case, at least not contributing to the actual load of a PC or console; what its controller can do is save CPU cycles. Microsoft explained how their custom lossless compression ASIC saves CPU cores.
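
A rough feel for the "saves CPU cores" claim; the per-core software inflate speed below is a ballpark assumption, not a measurement:

```python
# Rough feel for "the decompression ASIC saves CPU cores".
# Per-core software inflate speed is a ballpark assumption, not a measurement.

TARGET_OUTPUT_GBPS = 4.8        # XSX's quoted compressed-path output rate
SW_INFLATE_GBPS_PER_CORE = 1.0  # assumed zlib-class software speed per core

cores_needed = TARGET_OUTPUT_GBPS / SW_INFLATE_GBPS_PER_CORE
print(f"~{cores_needed:.0f} cores of software inflate to match the ASIC")
```

Under that assumption, matching the hardware block in software would eat several whole Zen 2 cores, which is the cost the ASIC removes.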
 

Dark1x

Digital Foundry
Verified
Oct 26, 2017
3,530
I sometimes ask that myself. They are journos, so they get some info not available to the average user, for sure, but they started counting pixels and vegetation density early in the PS360 era and became popular; I'm not sure they have more technical knowledge than people who already post in forums like this.
I don't want to jump into the arguments, but I'm a software engineer by trade with more than 10 years' experience working in the automotive industry. I only switched to doing DF stuff when we left for Europe, as it was a chance to do something I was more passionate about. Definitely NOT a journalist by training at all, but we have one of the best ensuring that THAT side of things gets done right.

I will say, what Alex is alluding to in this thread is based on discussions with multiple people doing actual work on this box, so keep that in mind...

There really is some interesting tech inside these consoles, and just yesterday I asked the DF people if they could shed some light on the differences between the lossless compression ASICs and the respective algorithms those ASICs implement, Kraken vs. BCPack and Zlib, but it seems no one wants to have that discussion, as it seems to be too technical and unpopular. It seems more interesting to go down the 'simple math path' and construct edge cases where one console could outperform the other, all done by armchair devs.
Good lord, dude, you can't ask yesterday and expect content immediately. Even if we were to jump right on it, it takes a lot of time to create that video! I think that's a technical topic that should be covered when we have real data from software that can illustrate the differences.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Good lord, dude, you can't ask yesterday and expect content immediately. Even if we were to jump right on it, it takes a lot of time to create that video! I think that's a technical topic that should be covered when we have real data from software that can illustrate the differences.
Sorry, I didn't get an answer back, so I thought it was lost or ignored. I am not demanding anything; I even asked elenarie, but of course that person is under NDA, so that was expected. A simple "we might look at it" would have been sufficient, but hearing nothing suggested a "nope, won't do".
 

Patitoloco

Banned
Oct 27, 2017
23,595
New thread worthy imo, because there is a lot of misconception that it doesn't work like that at all.
That's not news, though; Cerny said it right away. He also said that "in that worst-case scenario" a 10% reduction in power on one of them would only drop the clock by a few points, so they expect that to be almost unnoticeable.
 

Dark1x

Digital Foundry
Verified
Oct 26, 2017
3,530
Sorry, I didn't get an answer back, so I thought it was lost or ignored. I am not demanding anything; I even asked elenarie, but of course that person is under NDA, so that was expected. A simple "we might look at it" would have been sufficient, but hearing nothing suggested a "nope, won't do".
I didn't see it here, though, to be honest. The sheer volume of replies and questions people have been throwing at us makes it tough to respond to everything. Only this morning is it quieting down! It's all good, though.
 

amstradcpc

Member
Oct 27, 2017
1,768
... not absolutely thrashing both.
Indeed, this info comes from people who work on the thing. Basically, if the GPU is at 10.2 TF, the CPU is not at 3.5 GHz.
I think it doesn't work like that. They are monitoring the transistors occupied in the CPU and GPU continuously while running max clocks on both. If they reach an occupancy point (for example, all the CUs and registers busy), that is when SmartShift enters the game, reducing the clock cap on the CPU or GPU.
For example: a real-time scene from Uncharted takes 95% of the transistors in both CPU and GPU. They have a profile and know that with that occupancy they are over the 220-watt cap of the cooling solution, so they start downclocking. And that scene will hit that point on all PS5s out there.
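
A minimal sketch of that kind of deterministic, activity-based governor; the power model and all constants are invented for illustration:

```python
# Sketch of a deterministic, activity-based clock governor: estimate power
# from workload activity (not from a temperature sensor) and trim clocks only
# when the modelled power exceeds the budget. All constants are invented.

POWER_BUDGET_W = 220.0
MAX_GPU_GHZ = 2.23

def modelled_power(activity: float, clock_ghz: float) -> float:
    """Toy power model: scales with activity and roughly with clock cubed."""
    return 250.0 * activity * (clock_ghz / MAX_GPU_GHZ) ** 3

def governed_clock(activity: float) -> float:
    clock = MAX_GPU_GHZ
    while modelled_power(activity, clock) > POWER_BUDGET_W:
        clock -= 0.01  # trim in small deterministic steps
    return clock

for activity in (0.80, 0.95, 1.00):  # fraction of the chip kept busy
    print(f"activity {activity:.0%}: {governed_clock(activity):.2f} GHz")
```

Because the input is modelled activity rather than a temperature reading, every unit arrives at the same clocks in the same scene, which matches the deterministic behaviour Cerny described.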