Interesting comments from someone in the dev industry in the comments section of the latest The Verge article on PS5 vs. Xbox Series X. Link to the article:
https://www.theverge.com/2020/3/18/...ries-x-comparison-specs-features-release-date
He says the Xbox Series X is more powerful and easier to develop for, and that Microsoft caught Sony off guard with their initial announcement, leaving Sony scrambling ever since.
Link to the user so you don't have to trawl through hundreds of comments:
https://www.theverge.com/users/JR78#activity
Some extracts:
I have been wondering about the following:
"The SSD bandwidth is nice, but will not be all that noticeable to most people. Think of a loading screen. 10GB of data going into the system memory on that loading screen would take about 1.25 seconds on the PS5 to load with compression techniques and 2.08 seconds on the Xbox.
Once you get over 2GB/s, the differences in load times to fill RAM are negligible for most people."
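Those figures check out as rough arithmetic. A minimal sanity check, assuming the commonly cited effective (compressed) throughput of about 8 GB/s for the PS5 (5.5 GB/s raw) and 4.8 GB/s for the XSX (2.4 GB/s raw):

```python
# Back-of-envelope load-time check for a 10 GB payload.
# The effective (compressed) throughput figures are assumptions based on
# the commonly cited specs, not measured numbers.
PAYLOAD_GB = 10.0

effective_gbps = {
    "PS5": 8.0,  # ~5.5 GB/s raw, boosted by Kraken compression
    "XSX": 4.8,  # ~2.4 GB/s raw, boosted by BCPack/zlib compression
}

for console, rate in effective_gbps.items():
    print(f"{console}: {PAYLOAD_GB / rate:.2f} s to load {PAYLOAD_GB:.0f} GB")
# PS5: 1.25 s, XSX: 2.08 s -- matching the extract above.
```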
For the game to run into a bottleneck with the XSX's SSD, I would imagine that it would need to load in data at more than 2.4 GB/s in a scene. Is that a problem? I don't know anything about the data rates devs are expecting to work with: will 2.4 GB/s be a limiting factor? It's a jump by a factor of a few dozen, whereas the previous generation's jump (Blu-ray at 9 MB/s to HDD at 100 MB/s (?), before accounting for random access overhead) was quite a bit smaller. Of course, devs ran into I/O throughput problems this gen, so it'll be interesting to see if they will expand to fill the speed they are given. I do not have a clue if they will or not.
By the same token:
"10GB of RAM on the Xbox is 112 GB/s faster than the PS5. This is where the high fidelity textures, models, etc would all be loaded. These have much faster throughput through the APU unit because of this, leading to fewer missed cycles and more efficient processing. It is likely that the graphical differences will be greater than what the 18% difference in raw power tells us.
The improved I/O really only relates to loading into memory, but beyond 2GB/s the differences become fairly negligible. With their compression setup, it is likely the Xbox can fill all available RAM in a theoretical 2.7 seconds versus 1.6 seconds for the PS5. Most people will not care about an extra second.
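The fill-time figures in that extract are consistent with the earlier rates if you assume roughly 13 GB of game-available RAM (my assumption, reverse-engineered from the quoted times):

```python
# Sanity check of the quoted RAM-fill times. The ~13 GB of game-available
# RAM is an assumption inferred from the quoted figures; the effective
# compressed SSD rates are the same ones used earlier.
GAME_RAM_GB = 13.0

for console, rate_gbps in {"PS5": 8.0, "XSX": 4.8}.items():
    print(f"{console}: fills {GAME_RAM_GB:.0f} GB in {GAME_RAM_GB / rate_gbps:.1f} s")
# PS5: ~1.6 s, XSX: ~2.7 s -- matching the extract.

# And the fast-pool bandwidth gap the extract refers to:
print(f"Bandwidth delta: {560 - 448} GB/s")  # XSX 10 GB pool vs. PS5 unified
```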
Do we have an idea what RAM bandwidth would be necessary to support the GPUs in these systems without any bottlenecking from RAM bandwidth whatsoever? From a quick Google search, an RTX 2080 Ti has 616 GB/s (13.45 TFLOPS) and the regular RTX 2080 has 448 GB/s (10.07 TFLOPS). The regular RTX 2080 seems like a close match for the PS5 in raw numbers, and the XSX has somewhat fewer TFLOPS and less bandwidth than the RTX 2080 Ti.

Then there's the question of whether any of the slower 336 GB/s pool is used for GPU processing: the claim made is that it isn't, but that it would instead be used for different tasks like audio. Do we know if that fills up the 3.5GB that is free, or does it make more sense that the GPU will draw from the slower memory as well? But even if the claim is true, the ratio of TFLOPS to RAM bandwidth still seems almost balanced between the two systems.

Now, the question is: the claim from that post is that the difference becomes bigger than the 15% FLOPS deficit (the same gap as the 18% quoted above, just measured from the other direction) due to the slower bandwidth. Is this really the case? A related question is whether my ratio framing (FLOPS per RAM bandwidth) is a valid way of looking at it. The ratio is 22.95 GFLOPS per GB/s for the PS5 vs. the XSX's 21.71 GFLOPS per GB/s (where lower would mean better). Those are really close ratios, so the difference seems negligible (again, if this is a valid way of looking at it).
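To make the ratio comparison concrete, here is that arithmetic spelled out (GPU figures as quoted above; for the XSX this assumes the GPU only touches the fast 10 GB pool at 560 GB/s, per the claim under discussion):

```python
# GFLOPS per GB/s of RAM bandwidth (lower = more bandwidth per FLOP).
# Figures are the ones quoted in the post; the XSX entry assumes the GPU
# uses only the fast 560 GB/s pool.
systems = {
    "PS5":         (10_280, 448),   # (GFLOPS, GB/s)
    "XSX":         (12_150, 560),
    "RTX 2080":    (10_070, 448),
    "RTX 2080 Ti": (13_450, 616),
}

for name, (gflops, bw_gbps) in systems.items():
    print(f"{name}: {gflops / bw_gbps:.2f} GFLOPS per GB/s")
# PS5 ~22.95 vs. XSX ~21.70 -- nearly identical ratios.
```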
Once again, I have no idea what the answers to these questions are, so perhaps someone with more technical knowledge can weigh in?