I think the answer is pretty straightforward based on what we've heard. It isn't. So who is right? We'll know soon enough. Based on the usual workload of a modern game, I expect the GPU to be running at full load and the CPU to be underclocking, since the system has a fixed power budget that must not be exceeded.
Assuming the clock relationship is simple, what I wonder is how often that will matter with workloads like the one you mentioned. If you're GPU bound - going hard on the GPU - it means there is some slack on the CPU side: the CPU finishes its frame work before the GPU does. As long as the clock behaviour doesn't tip the bound over to the other side of the system, what the clocks are doing will be transparent with respect to frame rate. The system will 'just' be transparently holding the GPU frame time down by keeping the GPU clocks high, and increasing the CPU frame time, relative to both running full throttle. As long as the latter doesn't pass the former, it's a net benefit.
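To put rough numbers on that (entirely made up, and assuming CPU work simply stretches in proportion to a clock drop, which real code won't do exactly): frame time is set by whichever side finishes last, so the CPU can give up slack without the frame rate moving.

def frame_ms(cpu_ms, gpu_ms):
    # Whichever processor finishes its frame work last sets the frame time.
    return max(cpu_ms, gpu_ms)

# GPU-bound game at notional full clocks on both sides (invented numbers):
cpu_full, gpu_full = 8.0, 16.0
print(frame_ms(cpu_full, gpu_full))   # 16.0 ms - the GPU sets the frame rate

# Same frame with the CPU downclocked ~25% so the GPU can stay at full clocks,
# assuming CPU work stretches proportionally with the clock drop:
cpu_slow = cpu_full / 0.75            # ~10.7 ms, still under the GPU's 16 ms
print(frame_ms(cpu_slow, gpu_full))   # 16.0 ms - frame rate unchanged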
The trade-off - or where it won't be so transparent for the optimisation engineer - will be in games where the frame time is more evenly divided between the two, rather than lopsided on either the CPU or GPU side. The question then becomes how typical that will be versus games that are, to a notable degree, either CPU or GPU bound.
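With invented numbers again, the evenly divided case looks like this - even a small CPU clock drop now shows up directly in the frame time:

# CPU and GPU within ~0.5 ms of each other at notional full clocks:
print(max(15.5, 16.0))          # 16.0 ms frame
# A 10% CPU slowdown (assuming proportional scaling) flips the bound:
print(max(15.5 / 0.90, 16.0))   # ~17.2 ms frame - the drop is visible now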
I would also say, one nice thing about this system would be if it's granular enough to respond dynamically, within a frame, to shifts in the workload. Over a frame the load varies on both the CPU and the GPU. If the clocks were instead fixed at some lower level on the CPU and/or GPU, the capacity to respond to those shifting bounds would be fixed too. That's the alternative we should compare against: holding cost constant, the other way to divvy up the power budget would have been a set of lower fixed clocks that fit within the power/thermal envelope.
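As a throwaway sketch of that comparison - assuming, purely for illustration, that a fixed 90%/90% clock pair and a shifting 100%/80% pair fit the same power budget, and that work time scales inversely with clock, neither of which will match the real hardware:

# Work per frame phase, in ms at notional full clocks (invented numbers).
phases = [
    {"cpu": 6.0, "gpu": 2.0},   # e.g. sim/AI-heavy start of the frame
    {"cpu": 2.0, "gpu": 9.0},   # e.g. geometry/lighting-heavy middle
    {"cpu": 1.0, "gpu": 5.0},   # e.g. post-processing-heavy end
]

def phase_ms(p, cpu_clock, gpu_clock):
    # Assume time scales inversely with clock - a big simplification.
    return max(p["cpu"] / cpu_clock, p["gpu"] / gpu_clock)

# Alternative A: both parts fixed at 90% clocks for the whole frame.
fixed = sum(phase_ms(p, 0.9, 0.9) for p in phases)

# Alternative B: shift the budget toward whichever side is loaded in each phase.
dynamic = sum(
    phase_ms(p, 1.0, 0.8) if p["cpu"] > p["gpu"] else phase_ms(p, 0.8, 1.0)
    for p in phases
)

print(round(fixed, 1), round(dynamic, 1))   # 22.2 vs 20.0 - dynamic wins here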