It's a Series X advantage, but it's exaggerated because the PS5 version is intentionally rendered at a lower resolution.
Do you have a source for such a statement? Because I don't think the PS5 version was intentionally held back by the developers. In my opinion they decided to go for 99.5% flawless 60 FPS and achieved this goal on Xbox Series X at 4K, while the PS5 would probably have dropped more frames at a higher resolution. Therefore they went with a resolution that lets them hit 60 FPS 100% of the time. The foliage part is interesting, because iirc PS is simply better at this due to higher frequencies.
The teraflops difference between them is 17.5%, not the ~30% resolution difference we see here.
Teraflops don't matter until they do. There are more hardware differences to consider for resolution, like RAM bandwidth, and funnily enough DF tested Hitman back in the day in their RDNA1 GPU test. What DF did was basically take two different GPUs (wide & slow vs. fast & narrow) and match their TF numbers. Then they tested games, and Hitman was one of those that ran better on the GPU with more CUs. So even if we ignore RAM bandwidth, your number is probably off, because there are indeed engines/games that prefer one configuration over the other.
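As a quick sanity check of the wide-vs-narrow point above, here's the standard peak-FP32 estimate for RDNA-style GPUs, plugged with the publicly quoted CU counts and clocks for both consoles (illustrative back-of-envelope math, not a performance claim):

```python
# Peak FP32 teraflops for an RDNA-style GPU:
# CUs * 64 shader cores per CU * 2 FLOPs per clock (FMA) * clock in GHz / 1000
def teraflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

# Publicly quoted figures: PS5 ~36 CUs at up to ~2.23 GHz (narrow & fast),
# Series X 52 CUs at a fixed 1.825 GHz (wide & slow).
ps5 = teraflops(36, 2.23)    # ~10.28 TF
xsx = teraflops(52, 1.825)   # ~12.15 TF

# The gap is roughly 17-18% depending on rounding -- nowhere near the
# ~30% resolution difference, which is the point being argued above.
print(f"PS5 {ps5:.2f} TF, XSX {xsx:.2f} TF, ratio {xsx / ps5:.3f}")
```

Same TF number can come from very different CU/clock combinations, which is exactly what DF's wide-vs-narrow test exploited.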
That's not what I meant. I meant that to take full advantage of Sony's SSD and performance profiling on the GPU, games would need specific engineering, like Demon's Souls with its 3-second load times.
That's the case for PC and Xbox as well. I don't think any I/O is pushed to its limit, and most of the changes will benefit all consoles across the board. Sure, there is some custom stuff in those consoles and developers need to code for that, but judging loading is not as easy as looking at SSD bandwidth and calling it a day. I feel the customizations of the PS5 are sometimes exaggerated. Don't get me wrong, those are cool, but it's not like you need a handbook to even grasp how to use the I/O, or have to write insanely lengthy code to utilize the GPU or SSD. In fact the PS5 is easy to develop for, and that's because it's very similar to a PC.
Maybe it's some sort of oversight, or something that will be patched and improved later on, just like Dirt 5 was on Series X. I would have wanted DF's comment/explanation on that performance gap, since they called out the Series X underperformance in Dirt 5 and called it a "bug", but now are basically just saying "well, it's probably just the stronger hardware on XSX."
They are doing this because the Xbox Series X has many hardware advantages compared to the PS5. Thus it was surprising that the PS5 performed better, and you can slam them for Dirt 5 all you want, but at the end of the day they were 100% spot on.
I think everyone knows the consoles will be, and are, close. But I don't think it's false to say "well, it's probably the stronger Xbox hardware" at all. I admit I was surprised by the difference, and perhaps a patch is coming, but again it's not a Dirt 5 situation, in which the LOD of the car looked so bad it could've come from the 360 era. I am glad that bug is solved.
The CPU is not involved at all in PS5 I/O for a perfectly optimized title. I doubt this is the case for Hitman 3; the loading times are long. Spider-Man is more like 2 seconds.
I am not just talking about the I/O. A CPU still needs to do some work in games. Turn 10, for example, calculated Drivatar behaviors during loading in the past. So my point was: if another task (CPU related, networking related, ...) takes longer than the task of filling the RAM, then bandwidth doesn't matter nearly as much. Thus we can't judge loading by SSD bandwidth alone.
If it takes the game, let's say, 10 seconds to do all the networking-related tasks, then it doesn't matter if the SSD can load all data in 2 or 9 seconds (random numbers). I don't think it's networking, but something is clearly holding back the theoretical bandwidth, because even if you look at Xbox vs Xbox, the Series S should load faster if you just look at the SSD, because it's not loading the same assets and has a smaller RAM pool.
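The argument above boils down to a simple bottleneck model: if loading tasks run in parallel, wall-clock load time is set by the slowest task, so SSD speed only shows up once nothing else is the bottleneck. A toy sketch (all durations are the made-up numbers from the comment, not measurements):

```python
# Toy model of parallel loading: the load screen ends when the LAST
# parallel task finishes, not when the SSD finishes.
def load_time(ssd_seconds: float, other_seconds: float) -> float:
    """Wall-clock load time with tasks running in parallel."""
    return max(ssd_seconds, other_seconds)

# If non-I/O work (networking, CPU setup, ...) takes 10 s, a 2 s SSD
# and a 9 s SSD produce the exact same 10 s load.
assert load_time(2.0, 10.0) == load_time(9.0, 10.0) == 10.0

# Only once the other work shrinks does the SSD difference become visible.
assert load_time(2.0, 1.0) < load_time(9.0, 1.0)
```

The same reasoning explains why the Series S doesn't automatically load faster despite smaller assets: something other than raw SSD bandwidth is setting the max().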
Of course the CPU on the PS5 isn't involved in the I/O tasks, but then again, there is more to loading than that.