Compared to an HDD... not the difference between 5.5GB/s and 2.4GB/s.
His post also said the XSX was good, but he thought the PS5 was better.
First post of the "devs react to PS5" thread.
Compared to an HDD... not the difference between 5.5GB/s and 2.4GB/s.
Wow. Just wow. And I have a hard time listening to his opinions on this specific subject, to be honest, as he has not said a single positive thing about PS5, nor a single negative thing about Series X. People are allowed to have biases (and that doesn't mean the business as a whole, i.e. Digital Foundry, presents things from a biased point of view, because I believe their videos are pretty objective), and I just believe the guy has a strong bias.
I delve further into why in my reply to him.
Actually, Mr. Cerny gave a possible example of just this. He pointed out that the PS5's I/O speed may be sufficient to completely exchange data at the pace typical of camera movement. That is, the total GPU resources could be concentrated solely on the view frustum. This would result in greater visual detail within the FOV than the same amount of compute power could provide without that I/O speed. Series X, being lower on the continuum, might either need to render less detail overall, or else have visible pop-in.
This depends on the just-in-time multiplier provided by the I/O, of course; no matter how fast the data comes through, the GPU still has to compute with it. The XSX with its greater resources will do that faster. The open question is how these two advantages stack up, and I agree we'll need to see actual software to start forming an opinion about that. Maybe such streaming will be too difficult in practice. But it's a logically plausible differentiation.
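Cerny's frustum-streaming idea is easy to sanity-check with back-of-envelope numbers. This is only a sketch: the 0.5 s turn time is an assumed figure for a fast 180-degree camera turn (not something either vendor has quoted), and the throughputs are the publicly stated compressed averages.

```python
def data_during_turn(throughput_gb_s: float, turn_seconds: float) -> float:
    """GB of asset data that can stream in while the camera turns."""
    return throughput_gb_s * turn_seconds

TURN_S = 0.5  # assumed duration of a fast 180-degree camera turn

for name, rate in [("PS5 (~8.5 GB/s avg)", 8.5), ("XSX (4.8 GB/s avg)", 4.8)]:
    print(f"{name}: {data_during_turn(rate, TURN_S):.2f} GB streamable per turn")
```

The point isn't the absolute numbers, but that the per-turn streaming budget is roughly double on PS5, which is what would let a game keep more of the GPU's detail budget inside the frustum.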
Compared to an HDD... not the difference between 5.5GB/s and 2.4GB/s.
I understand your need to dispel misinformation. But misinformation is coming from all directions, and you choose only to defend the Series X from that which makes it sound inferior in any way. Surely all the developers that are excited specifically about PS5's SSD setup care about more than just the fact it will load a little faster than the SSD setup Series X went with, and yet you claimed that a slight decrease in loading times WOULD be the only difference.

You've said unequivocally that if the CPU runs at max frequency on PS5, the GPU cannot, and vice-versa. This directly contradicts Mark Cerny's talk, which said that BOTH the CPU and GPU spend most of their time at max speed. Both cannot spend most of their time at max speed if only one can do it at a time. But you haven't clarified why you are saying something different than Mark Cerny is. Either both run at max speed most of the time, or one does while the other doesn't.
I'm a big fan of your DF videos. Don't get me wrong. You're very knowledgeable and are part of a great team. And I agree that Series X has a better CPU/GPU setup, as anybody would. I think Series X will usually have the slightly better versions of multiplat games. But your narrative thus far is that the only advantage PS5 has is slightly better loading speeds. Maybe that's because Series X falls closer in line with traditional PC design philosophy, and PC is your system of choice; I don't know. Developers seem a lot more excited about the PS5's SSD setup than you do.
One frame is 33.3 milliseconds, or 16.6. You are not teleporting in one frame with 100% different data from the SSD in memory on PS5 either. If a game ties its streaming of detail to a certain pop-in distance in front of the camera at a given speed, then you could simply reduce that threshold distance to make the same in-game world speed possible on an SSD of lower throughput. The magnitude of difference between the PS5 and XSX SSDs is not the same insane magnitude as between a 5400 RPM HDD and an NVMe drive, where things are just flat-out impossible.
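To put numbers on the per-frame point, here is a small sketch of the streaming budget per frame at the raw rates discussed, plus the linear pop-in distance scaling the post describes (an idealisation that ignores latency and redundancy in real streaming systems):

```python
def per_frame_budget_mb(throughput_gb_s: float, frame_ms: float) -> float:
    """MB that can come off the drive during one rendered frame."""
    return throughput_gb_s * 1000.0 * (frame_ms / 1000.0)

for fps, frame_ms in [(30, 33.3), (60, 16.6)]:
    ps5 = per_frame_budget_mb(5.5, frame_ms)
    xsx = per_frame_budget_mb(2.4, frame_ms)
    print(f"{fps} fps: PS5 ~{ps5:.0f} MB/frame, XSX ~{xsx:.0f} MB/frame")

# Shrinking the pop-in threshold in proportion to throughput keeps the same
# world-traversal speed workable on the slower drive:
scale = 2.4 / 5.5
print(f"XSX pop-in distance ~{scale:.2f}x the PS5 distance for equal speed")
```

So even at the raw rates, both consoles stream tens to hundreds of megabytes per frame; the difference is a scaling factor, not a hard wall.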
I completely agree with that. We don't know what benefits Cerny's vision will have over MS's. We only have seen some devs say they're happy Sony went the way they did.

There's no denying that a switch to SSD as a baseline may change the way some games are built. There's one difference though:
There's a narrative that 5.5GB/s is some magic number as a baseline, and that anything under it means games will have to be downscaled heavily in terms of game design, as if it were some minimum needed to achieve things, or as if everyone is going to max out such numbers.
Explain this please. I can see how you could have higher quality baked shadows and baked indirect lighting, so that their texel size increases, but in which way does it improve the quality of the geometry in the current frame? Is the current limit on geometry in game engines a result of VRAM storage space, or some function of the GPU's triangle shading/culling and geometry throughput?
Just want to say thanks Dictator for being a valued member of this community.
Yeah, I indeed mention that right in that post I think: the speed of LOD changes, in my response to him.

Of course not, but it might be a visually perceptible difference in LoD behaviour if one can get closer to optimal LoD behaviour than the other. Hence, an effect on visuals.
Of course XSX could do that too. But unlike in the conjectured PS5 case, there would be a noticeable lag between the user hitting the Guide button, and actually being in the OS. So possible, but with no QOL improvement as proposed for Sony.

They did mention that, but at least for the case that NX Gamer mentioned (offloading the OS to the SSD while running the game) it's not clear why that requires 5.5 GB/s, and why it would not be possible with the XSX's 2.4 GB/s.
This is incorrect. The 2.4GB/s and 5.5GB/s raw numbers don't account for differing compression approaches. But both platform holders have also given real-world average metrics including compression: 8-9GB/s for PS5, and 4.8GB/s for XSX. Note that the ratio is smaller here; that's presumably the effect of Microsoft's other implementations. But the performance is still at least 67% higher on PS5, up to ~88%.

And yet... with MS' superior BCPack texture compression and SFS/texture space shading they could literally reduce the amount of data needed to stream over that bandwidth... thus accomplishing the same thing.
https://forum.beyond3d.com/threads/...-gdc-2020-xbsx-ps5.61641/page-33#post-2112883
I hope the link works. Surfing on your phone. Pro: finding this only took me typing "beyond". Con: ERA's UI can be tricky.
Edit: And new posts containing your links don't automatically show up before you post 😄
You don't know that. You don't know what BCPack could possibly do.

Of course XSX could do that too. But unlike in the conjectured PS5 case, there would be a noticeable lag between the user hitting the Guide button, and actually being in the OS. So possible, but with no QOL improvement as proposed for Sony.
This is incorrect. The 2.4GB/s and 5.5GB/s raw numbers don't account for differing compression approaches. But both platform holders have also given real-world average metrics including compression: 8-9GB/s for PS5, and 4.8GB/s for XSX. Note that the ratio is smaller here; that's presumably the effect of Microsoft's other implementations. But the performance is still at least 67% higher on PS5, up to ~88%.
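Those percentages follow directly from the quoted figures, and are easy to reproduce. One caveat: with these endpoints, 9 / 4.8 works out to roughly 88%, so the exact upper bound depends on which averages you plug in. A quick check, using the rates as quoted in the thread:

```python
raw = {"PS5": 5.5, "XSX": 2.4}                 # GB/s, raw off the drive
compressed = {"PS5": (8.0, 9.0), "XSX": 4.8}   # GB/s, quoted typical averages

raw_gap = raw["PS5"] / raw["XSX"] - 1.0
low_gap = compressed["PS5"][0] / compressed["XSX"] - 1.0
high_gap = compressed["PS5"][1] / compressed["XSX"] - 1.0

print(f"Raw: {raw_gap:.0%} faster")                           # ~129%
print(f"Compressed: {low_gap:.0%} to {high_gap:.0%} faster")  # ~67% to ~88%
```

The compressed gap being smaller than the raw gap is exactly the "Microsoft's compression closes some of the distance" effect described above.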
You love that number don't you? The X is far superior when it comes to Ray Tracing by a significant percentage and I am not even going to start praising it. Numbers on paper is one thing, the execution is another.

This is in no way comparable to Blast Processing, which required extensive low-level optimization and for developers to go out of their way. The PS5 SSD is very simple. There is nothing to optimize, it's 129% faster. Period. But great contribution, mate.
Have you read my post? There is no magic. There are no unicorns. There are just facts. I don't know where you get the impression that there has to be a certain number. There is no threshold. Every improvement in performance will allow more possibilities for game developers. It just so happens that the PS5 SSD is 129% faster than the competition, which means that significant improvements will be possible.
That's not even a Beyond3D post... it's a link to a post from the old site, which is banned here for many obvious reasons.
Right off the bat, there seems in my eyes to be an oversight: the post waves away the 3.5 GB of RAM that the XSX has as wasted RAM, but I would think that in reality it would be used for less bandwidth-constrained tasks (I've read that audio is a good example in the XSX articles). The 8GB pool of free memory attributed to the PS5 would be reduced by similar non-GPU tasks, so a better comparison number would be that same 5 GB pool (disregarding the argument of offloading the OS to disk for the moment).

The Place That May Not Be Mentioned said:
What about PS5 then? Is it just 2x faster and that's it?
Not really.
The whole 8GB of the RAM we have "free" can be a "streaming pool" on PS5.
But you said "we cannot load while frame is rendering"?
In XSeX, yes.
But in PS5 we have GPU cache scrubbers.
This is a piece of silicon inside the GPU that will reload our assets on the fly while GPU is rendering the frame.
It has full access to where and what GPU is reading right now (it's all in the GPU cache, hence "cache scrubber")
It will also never invalidate the whole cache (which can still lead to GPU "stall") but reload exactly the data that changed (I hope you've listened to that part of Cerny's talk very closely).
But its free RAM size doesn't really matter; we still have 2:1 old/new data in one frame, because the SSD is only 2x faster?
Yes, and no.
We do have only 2x faster rates (although the max rates are much higher for PS5: 22GB/sec vs 6GB/sec)
But the thing is, GPU can render from 8GB of game data. And XSeX - only from 2.5GB, do you remember that we cannot render from the "streaming" part while it loads?
So in any given scene, potentially, PS5 can have 2x to 3x more details/textures/assets than XSeX.
Yes, XSeX will render it faster, higher FPS or higher frame-buffer resolution (not both, perf difference is too low).
But the scene itself will be less detailed, have less artwork.
Yeah, I indeed mention that right in that post I think: the speed of LOD changes, in my response to him.
At high camera speed, presuming that the I/O is the limiting factor and not the geometry throughput of the GPU (assets do not have draw distances and LOD ranges due to I/O alone; questions of shading and overdraw are usually very important), PS5 would have the advantage of less temporal lag on its draw-in, or perhaps in the perceptual range at which an object fades in.
Really don't see how the X is far superior when it comes to Ray Tracing.
Better, yes, but not far superior, with 15% give or take.
And racing games are typically not very demanding to render, so there's more possibility to spend budget on RT without impacting the rest of the presentation.
Lockhart would also be rendering much lower resolution, which will reduce the calculations needed. Even if it's only 4TF as rumored, number of rays should be about 1/3rd of XSX. If resolution is only 1/4 (1080p vs. 2160p), there shouldn't be much problem. This rough number does seem to indicate lowered RT quality if Lockhart renders at 1440p, or if XSX is rendering below full 4K.
It's confirmed. The way Microsoft promoted their RT performance was by saying XSX can do 380 billion intersections per second. This number is the count of TMUs times the clockspeed. (TMU is Texture Mapping Unit, of which there are 4 per CU.) This gives an exact figure of 379.6 billion; doing the same calculation for PS5 gives 321.1 billion. That's 15% lower, the exact same gap as general compute... which makes sense, since it depends on the same two things: number of CUs and clock.
In other words, if a PS5 game is 15% lower resolution, then it should have the same quality of RT. Or, XSX could have 15% better RT, but then the two games would run at the same resolution. (In general, logical terms only, of course; real results will differ slightly from game to game.)
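The arithmetic above is easy to reproduce. The CU counts and clocks used here (52 CUs at 1.825 GHz for XSX, 36 CUs at 2.23 GHz for PS5) are the publicly stated specs rather than figures from this thread:

```python
def ray_intersections_gps(cus: int, clock_ghz: float, tmus_per_cu: int = 4) -> float:
    """Giga-intersections/second = TMU count x clock, per the promoted metric."""
    return cus * tmus_per_cu * clock_ghz

xsx = ray_intersections_gps(52, 1.825)  # ~379.6
ps5 = ray_intersections_gps(36, 2.23)   # ~321.1
print(f"XSX: {xsx:.1f}G/s, PS5: {ps5:.1f}G/s, PS5 is {1 - ps5 / xsx:.1%} lower")
```

Since the metric is just TMUs times clock, it tracks general compute exactly, which is why the RT gap matches the TFLOPS gap.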
The GPU operates on data stored in its local caches. When those calculations are done, the results are sent back to RAM and new source data is loaded from there. When the GPU clockspeed is high, calculations finish faster, so the caches need refilling from RAM more often. But GPU clockspeed doesn't apply to RAM, which has its own invariant bandwidth. That means that, relative to the amount of math you're doing, RAM can service proportionally fewer requests.
This is why people with technical know-how think the RAM bandwidth may be a limiting factor for both PS5 and XSX performance. The way both platforms might not be bandwidth-starved is if they can keep as much data in the local caches as possible, reducing trips to RAM. But also, AMD has worked to reduce bandwidth needs for the same amount of work by changing the architecture.
This may also be ameliorated by the fact that PS5 is likely to have more L2 cache per CU than XSX.
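One way to see the "fixed bandwidth vs rising compute" tension is bytes of bandwidth available per FLOP. This sketch uses the public figures (560 GB/s on XSX's fast 10 GB pool and 12.15 TF; 448 GB/s and 10.28 TF for PS5); the metric itself is just an illustrative ratio, not something either vendor quotes:

```python
def bytes_per_flop(bandwidth_gb_s: float, tflops: float) -> float:
    """GB/s divided by GFLOP/s gives bytes of RAM bandwidth per FLOP."""
    return bandwidth_gb_s / (tflops * 1000.0)

print(f"XSX: {bytes_per_flop(560.0, 12.15):.4f} bytes/FLOP")
print(f"PS5: {bytes_per_flop(448.0, 10.28):.4f} bytes/FLOP")
```

Both land around 0.04-0.05 bytes per FLOP, which is why the argument above applies to both machines: keeping work resident in cache (and architectural bandwidth savings) matters as much as the headline bandwidth.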
Neither platform will fully utilize the RAM available on what is directly in view, but if Cerny is right, there could potentially be more to work with at a given time with the right engine optimizations.

Right. Or potentially LoD, mip levels, on textures.
I think that's the kind of stuff people talk about wrt 'ssds and visuals'. In the end both systems presumably have the same RAM for games, so once the camera settles down, and you don't have a temporal component, they could converge on the same asset detail or whatnot. But there's potential for variance in the temporal behaviour.
This is ignoring the impact of clock speed. Also Liabe Brave actually made a post about RT on the previous page. Be interesting to see how it all pans out.
NX Gamer: PS5 Full Spec Analysis | A new generation is Born
I guess the speed at which the GPU accesses the memory is defined by the memory speed, so is unaffected by GPU clock boosts?
Still one of the best threads on Era in a while imo. It's super fun to learn about these consoles. I wouldn't call the conversation rotten, just passionate. We all know these boxes are coming in with differing capabilities, exclusive games, and most likely prices. Unless money is no object, I think most people if they can afford a new console can really only afford 1 (and do justice to it by buying several games, extra controllers, etc). So although both are beasts, although both will get great games and services, the question 'which is best?' remains extremely relevant to anyone watching with purchase interest. Taking the 'both are great and equal' perspective is actually frustrating and not helpful. People need to decide where their $1000 is going this year and nobody wants to toss a coin to decide.

this thread turned real rotten but i just want to chime in and say that i'm glad both consoles seem to be beefcakes this time around
i can't wait to see Spider-Man 2 and whatever Naughty Dog's new thing ends up being (space opera pls) running on this thing
So you are saying both systems don't maintain maximum clocks and fluctuate, and the max drop the PS5 would have is 2% on the clocks. Why go for this system at all? Why not just lock the peak at 2% less and not have variability that developers have to plan for? Even Dictator from DF has said developers told him they have to choose from profiles providing more GPU at the cost of CPU, for example. Seems like a waste just to squeeze out 2% extra GPU clock if the max it will ever drop is 2%.
(edited to shorten)
At high Camera speed, presuming I/O is the limiting factor and not geometry throughput of the GPU, PS5 would have the advantage of less [pop in].
That alone is such a damn blessing paired with an insane I/O throughput unlike anything seen before in video games development as a baseline. First party will 100% make heads spin with next generation as well. Cannot freaking wait honestly, because it will absolutely be a rising tide situation. Everyone including 3rd party will step their engines and games up once again. It shall be glorious.

Worldwide Studios' baseline hardware is PS5, and PS5 only. So they can work on games that fully utilise the SSD, Ray-Tracing, and overall architecture of the machine. The same cannot be said about Microsoft, they will apparently have Lockhart, and also PCs to target, and with that being said, do we honestly think they're going to have a minimum requirement of an SSD (at a specific I/O throughput) and RTX as a minimum PC requirement? Doubtful.
Yes, compression improves real throughput above the raw transfer speed. But as I said, we do know what Microsoft's compression can do--they told us! They said real transfer speed with their compression is 4.8GB/s, not 2.4GB/s. Similarly, Sony said real transfer speed with their compression is 8-9GB/s, not 5.5GB/s. So with compression, Sony is currently between 67% and ~88% faster, not 129%.

You don't know that. You don't know what BCPack could possibly do.
How about we wait and see? The fact remains though... better compression means better utilization of RAM size, AND bandwidth.
Yes, a difference in RT could be visible, because XSX has more resources available. But if so, those resources will not be available for other rendering tasks. So the game might run at the same resolution and other graphical settings, but with ~15% better RT on Series X. On the other hand, if the extra power of XSX is used to further improve resolution, then there's no longer any extra to do additional RT. So the game would have the same RT settings as PS5, applied to a slightly more detailed image.
Right, that's throughput though... however you have to remember that BCPack can possibly compress textures far better than Kraken, which is more general-purpose. The majority of game data is texture data... If MS' compression+BCPack allows more data to be stored in the same amount of RAM, then MS can require less bandwidth to transfer the same amount of data.

Yes, compression improves real throughput above the raw transfer speed. But as I said, we do know what Microsoft's compression can do--they told us! They said real transfer speed with their compression is 4.8GB/s, not 2.4GB/s. Similarly, Sony said real transfer speed with their compression is 8-9GB/s, not 5.5GB/s. So with compression, Sony is currently between 67% and ~88% faster, not 129%.
Why do I have to say positive things about a piece of hardware when I am trying to clear up misconceptions that I read here on the forum about said hardware? Would you prefer I type "Actually, you cannot run path tracing on the SSD. But PS5 has a nice GPU"? That would be sycophantic and a non sequitur.
I have been clearing up misconceptions in postings about Xbox Series X as well, regarding its RT or raster performance... based on the video work I have done on it. You should see the hell I get there for my apparent biases against the Xbox Series X, for telling people it did not perform like an RTX 2080 Ti or better in Gears 5!
If you are curious as to how I view the consoles since the information came out: I just think the GPU and CPU setup is nicer in Xbox Series X, carrying the exact same line of thinking that my employer has basically also stated. It is a % amount that we can talk about. It is not nebulous and unproven (like Power of the Cloud was, for example). I also think the PS5 and Xbox Series X are both gonna be great performance-wise, and very similar in performance and resolution in games that use rasterisation, and I am incredibly happy they are both targeting uniquely high specs in comparison to the Xbox One and PS4. Regarding the PS5 - I am so happy they are already targeting games with RT, as Cerny mentioned. Just wish he showed it actually running RT.
Sadly, console wars on this forum make it so that people feel the need to prove weird nonsensical theories as to why one of these boxes is game changingly better than one or the other... with absolutely no evidence. So yeah, I feel compelled to ground posts in reality that are filled with baseless conjecture about the SSD/GPU/CPU/RAM in the PS5/XSX. Especially since most if not all of such posts exist to just try and raise one of these consoles above the other... for console war reasons.
Thanks for posting this - and all your bolded parts are exactly why it is important. Sony and Mark Cerny talking about the variable clock was to highlight that it can and will vary. Otherwise they would just leave that information unknown and opaque to the audience and only tell developers, if it were a 99.98% thing. Developers need to know how and why frequency will drop, and a "priority mode" is IMO a super smart way to do it. A game can get a constantly and reliably faster GPU if the CPU isn't important to the game anyway. That is good design.
I agree that people need info to make a major purchase decision, but 99% of the market don't need the info in this thread to make a purchasing decision. Since generations began, power has never been a determinant. The info people really need is price and games. People were willing to buy PS2s, 360s, and Switches at record pace despite power concerns from online enthusiasts.

Still one of the best threads on Era in a while imo. It's super fun to learn about these consoles. I wouldn't call the conversation rotten just passionate. We all know these boxes are coming in with differing capabilities, exclusive games, and most likely prices. Unless money is no object, I think most people if they can afford a new console can really only afford 1 (and do justice to it by buying several games, extra controllers, etc). So although both are beasts, although both will get great games and services, the question 'which is best?' remains extremely relevant to anyone watching with purchase interest. Taking the 'both a great and equal' perspective is actually frustrating and not helpful. People need to decide where their $1000 is going this year and nobody wants to toss a coin to decide.
All compression efficiencies are already included in the real-world numbers. Remember, the actual throughput of the physical hardware is 5.5 or 2.4GB/s. More than that is literally impossible to shove down the connection. Therefore, when metrics are given that "exceed" that, it's meant to indicate what size the data would be if uncompressed. That is, 4.8GB worth of data is stored compressed as 2.4GB, comes off the SSD, and is decompressed on the way to RAM, all in one second. On PS5 the compression seems to be less efficient, so the same 4.8GB worth of data is ~3.1GB compressed. However, it can also move faster through the pipes, so the lesser compression is more than made up for, and the same 4.8GB is transferred more quickly.

Right, that's throughput though... however you have to remember that BCPack can possibly compress textures far better than Kraken which is more general purpose. The majority of game data is texture data... If MS' compression+BCPack allows for more data to be stored in the same amount of RAM then MS can require less bandwidth to transfer the same amount of data.
Right... but if MS' compression is better... it can fit MORE data in the same amount of RAM. In your example Sony has filled up 3.1GB of RAM in 1 second, whereas MS only has filled up 2.4GB... for the same amount of data. This gives MS more memory to work with. If they can fit more data in the same amount of RAM, then they can reduce the pressure of streaming new data from storage to RAM.

All compression efficiencies are already included in the real-world numbers. Remember, the actual throughput of the physical hardware is 5.5 or 2.4GB/s. More than that is literally impossible to shove down the connection. Therefore, when metrics are given that "exceed" that, it's meant to indicate what size the data would be if uncompressed. That is, 4.8GB of data comes off the SSD, is compressed down to 2.4GB, and passes to RAM in one second. On PS5 the compression seems to be less efficient, so 4.8GB off SSD becomes ~3.1GB to pass to RAM. However, it can also move faster through the pipes, so the lesser compression is more than made up for, and the same 4.8GB is transferred more quickly.
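The mechanism both posts are describing reduces to: effective throughput = raw link rate x compression ratio, where the ratios are implied by the quoted averages (about 2.0x for XSX's 4.8/2.4, about 1.55x for PS5's ~8.5/5.5). A minimal sketch:

```python
def effective_rate_gb_s(raw_gb_s: float, compression_ratio: float) -> float:
    """Uncompressed-equivalent GB/s delivered after decompression."""
    return raw_gb_s * compression_ratio

print(f"XSX: {effective_rate_gb_s(2.4, 2.0):.1f} GB/s effective")   # ~4.8
print(f"PS5: {effective_rate_gb_s(5.5, 1.55):.2f} GB/s effective")  # ~8.5
```

Worth noting: the hardware decompressors expand data on the way into RAM, so these ratios primarily describe I/O and storage savings; whether RAM footprint also shrinks depends on the in-memory format of the assets.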