This is kind of a circular argument, especially when we're talking about multi-platform games. You obviously couldn't build GPGPU use into your game if you're designing for both Xbox and PS4. So it stands to reason that we'd see a significant difference in exclusive titles, which is what we do see. Titles like DriveClub used the extra power in various ways, and we also saw Killzone Shadow Fall and Infamous use it for things like sound reflections. The point I'm making, even with Claybook, is that the extra performance doesn't "have" to be allocated to visuals just so it can scale down to a lower-end system. I do think the CPUs this time around are much better, but we also have 7 years to go with this hardware and whatever baseline we establish up front.
We went into this generation with a low CPU baseline that really began to hurt towards the end; this time we'll be starting a whole new 7-year period with a 4 TFLOP GPU baseline.
edit: actually, as I'm thinking about it, we're essentially talking about an exact replay of how it worked last gen. Games were designed for the Xbox base system and scaled up, not the other way around. We already have an example in the current gen: with a similar CPU but different GPUs, most games simply get a resolution difference, maybe a frame-rate increase, some added effects, and call it a day. We see this already from the OG Xbox One up to the X1X, which is 5x more powerful.
This generation we have examples of the Jaguar CPU being the "baseline" for core game-system development. What are your examples of the Xbox GPU holding back core game systems? All it did was hold back visuals on XB1, as resolutions and various graphical effects needed to be toned down or removed in order for games to run on the system.
Visual assets and systems are created for max settings, then downscaled for weaker hardware.
Again, I'm arguing that non-scalable elements don't get designed into most games precisely because they are not scalable. Most elements are scalable strictly because the platform design dictates it, not because those elements would always be scalable anyway. You're taking the fact that things are designed to be GPU-scalable as evidence that most anything you want to run on the GPU will be.
I think I'll just link this UBIsoft test on GPGPU cloth simulation:
Ubisoft's GDC 2014 document provides insights into the performance of the PlayStation 4 and Xbox One's CPU and GPU, along with GPGPU compute information from AMD and other sources.
www.redgamingtech.com
We can see there are distinct workloads that benefit from GPGPU, but this also means we cannot plan a gameplay mechanic around the higher level of GPGPU compute in this cloth simulation, the obvious reason being the vast gap in those workloads. Therefore any use of cloth simulation would have to NOT affect gameplay or design, and be relegated to being visual and scalable only.
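The visual-vs-gameplay distinction above can be sketched in a few lines. This is purely illustrative: the budget numbers and per-vertex cost are made up, not real console figures, and the function name is hypothetical.

```python
# Illustrative sketch only: budgets and costs are invented numbers,
# not measurements from any real console.

def cloth_grid_side(gpu_budget_tflops: float, cost_per_kverts: float = 0.05) -> int:
    """Pick the largest square cloth grid whose per-frame simulation
    cost fits within the GPU compute budget."""
    max_kverts = gpu_budget_tflops / cost_per_kverts   # kilovertices we can afford
    return int((max_kverts * 1000) ** 0.5)             # side length of a square grid

# A visual-only cloth effect scales freely with the hardware:
low_end = cloth_grid_side(4.0)     # hypothetical 4 TFLOP baseline
high_end = cloth_grid_side(12.0)   # hypothetical stronger system
# high_end > low_end: same game, denser cloth on stronger hardware.

# A gameplay-affecting cloth sim cannot do this: every player must
# simulate the same grid, so it is sized for the weakest supported GPU.
gameplay_grid = cloth_grid_side(4.0)
```

The asymmetry is the whole point: only the cosmetic version gets to consume the extra TFLOPs.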
The only reason these effects are, as you put it, sliders on a scale is that developers know up front they cannot rely on the higher levels of performance, given the vast disparity between systems, so they design their gameplay elements around the limitations of their lower baseline systems.
Isn't this just revealing the fallacy of your argument? The article you posted explicitly mentions porting the cloth-physics task to the GPU in part because CPUs this gen were too weak and would handle such a task inefficiently.
Cloth physics are a superficial visual element, so it's most prudent to use the GPU to handle them, especially now with how powerful GPUs have become. The result is a feature that can be scaled down or removed without any impact on core gameplay. Developers aren't designing cloth physics around a weak GPU and scaling up. They're designing the cloth-physics system for the strongest hardware, then toning it down as necessary for weaker hardware.
Your reasoning is circular. Cloth physics are a perfect candidate to offload to GPUs BECAUSE they aren't something devs typically intend to impact core systems anyway. As such, when we don't see cloth physics affect gameplay, it's not because of limitations of weaker GPUs in the hardware target - it's most likely because no one wants to make cloth physics a core gameplay system.
A dev that wants cloth physics to be part of a core gameplay system (if such a developer exists) is limited by this gen's weak CPUs, since that's where it makes the most sense to handle physics that have a game-state effect.
Offloading such tasks to GPUs means they are inherently scalable, and they either won't impact core systems or their impact on core systems will be scalable in nature. The improved CPUs in next-gen hardware will allow such core physics systems to move back to the CPU if necessary.
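The routing rule being described reduces to a simple policy. A minimal sketch, with entirely hypothetical task names and categories:

```python
# Hypothetical task-routing policy: the task names and sets below are
# illustrative, not taken from any real engine.

GAMEPLAY_CRITICAL = {"hit_detection", "projectile_physics", "destructible_walls"}
COSMETIC = {"cloth", "debris_particles", "hair"}

def route(task: str) -> str:
    """Gameplay-critical work runs on the fixed CPU baseline so every
    player simulates the same result; cosmetic work goes to the GPU,
    where it can scale (or be dropped) per machine."""
    if task in GAMEPLAY_CRITICAL:
        return "cpu"   # fixed cost, identical result everywhere
    if task in COSMETIC:
        return "gpu"   # scalable, no game-state impact
    return "cpu"       # default to the safe, uniform path
```

Under this policy, a stronger next-gen CPU simply raises how much can sit in the gameplay-critical bucket without the GPU's variance ever mattering.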
I see we aren't going to make headway on this so I will just draw your attention to the bolded, explain the core difference between our arguments, and leave it at that:
I'm arguing that because GPUs have wide variances in performance, gameplay elements are not run on the GPU.
You're arguing that gameplay elements are not run on the GPU, so there is no reason to worry about wide variances in performance.
While both are true, my proof is that when exclusives are made, gameplay elements are offloaded to the GPU, which happens in a uniform closed system - or, in the case of the PS3, GPU elements are offloaded to the CPU. In fact, even in this generation we've seen things such as AI and physics offloaded to the GPU. It's not a rarity at all.
If viewing the XSS/XSX through the lens of multiplatform games, which are designed with wide variance in mind, then no, it wouldn't matter, as the game wouldn't be designed like that in the first place.
If viewed through the lens of platform-exclusive games, then a closed system without performance variance will make better use of its guaranteed resources than a game designed to scale up and down based on performance variances.
How prevalent either of these scenarios will be is up for debate: maybe only one or two PS5 exclusives take advantage of the higher baseline for gameplay elements, maybe they all do, maybe none do. I guess we won't know until the generation and its games release.
I'm glad to see the argument slowly shift away from the notion that the Lockhart GPU will generally inform baseline core systems, or that devs design GPU functions for the worst hardware then scale up. The reality is that with multiple hardware targets, tasks that are scalable and nonessential to core gameplay systems are offloaded to the component that has the widest variance between player systems - the GPU. This allows devs to scale down without impacting gameplay.
It's possible that some PS5 exclusive(s) will utilize the GPU in some un-scalable fashion, and depending on how much frametime that function requires, the system wouldn't be replicable in Microsoft's ecosystem - but this is highly unlikely and stands to be incredibly rare if it ever happens. It certainly won't be prevalent enough to dictate hardware strategy.