Not wishing to draw comparisons with any existing hardware past, present or future, Cerny presents an intriguing hypothetical scenario - a 36 CU graphics core running at 1GHz up against a notional 48 CU part running at 750MHz. Both deliver 4.6TF of compute performance, but Cerny says that the gaming experience would not be the same.
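The arithmetic behind Cerny's hypothetical is straightforward to check. The sketch below assumes an AMD GCN/RDNA-style design, where each compute unit has 64 shader lanes and each lane retires two FP32 operations per cycle via fused multiply-add; those two constants are assumptions based on AMD's published architectures, not figures from the presentation itself.

```python
def fp32_tflops(cus, ghz, lanes_per_cu=64, flops_per_lane_per_cycle=2):
    """Peak FP32 compute in teraflops.

    Assumes a GCN/RDNA-style GPU: 64 shader lanes per compute unit,
    2 FLOPs per lane per cycle (one fused multiply-add).
    """
    return cus * lanes_per_cu * flops_per_lane_per_cycle * ghz * 1e9 / 1e12

# Cerny's two notional configurations land on the same compute figure:
print(fp32_tflops(36, 1.0))   # 36 CUs at 1GHz    -> 4.608
print(fp32_tflops(48, 0.75))  # 48 CUs at 750MHz  -> 4.608
```

Both configurations come out at 4.6TF precisely because the vector ALU count and the clock trade off against each other; everything else in the GPU, as Cerny goes on to argue, does not.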
"Performance is noticeably different, because 'teraflops' is defined as the computational capability of the vector ALU. That's just one part of the GPU, there are a lot of other units - and those other units all run faster when the GPU frequency is higher. At 33 per cent higher frequency, rasterisation goes 33 per cent faster, processing the command buffer goes that much faster, the L1 and L2 caches have that much higher bandwidth, and so on," Cerny explains in his presentation.
"About the only downside is that system memory is 33 per cent further away in terms of cycles, but the large number of benefits more than counterbalance that. As a friend of mine says, a rising tide lifts all boats," explains Cerny. "Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs - when triangles are small, it's much harder to fill all those CUs with useful work."
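Cerny's small-triangle point can be illustrated with a toy occupancy calculation (my own simplification, not from the presentation): pixel-shader work is issued in fixed-width wavefronts, and a triangle covering only a handful of pixels still ties up lanes in a whole wavefront, so tiny triangles leave much of the hardware idle. Spreading that sparse work across more CUs only makes each one harder to keep busy. The 64-lane wavefront width is an assumption in the GCN tradition; real GPUs can pack work more cleverly than this model does.

```python
import math

WAVE_LANES = 64  # assumed wavefront width (GCN-style); illustrative only

def lane_utilisation(pixels_per_triangle):
    """Fraction of shader lanes doing useful work, under the simplifying
    assumption that each small triangle fills its own wavefront(s)."""
    waves = math.ceil(pixels_per_triangle / WAVE_LANES)
    return pixels_per_triangle / (waves * WAVE_LANES)

for px in (64, 16, 4):
    print(f"{px:2d}-pixel triangle: {lane_utilisation(px):.0%} of lanes busy")
```

As triangles shrink, utilisation falls off sharply, which is why a wide-and-slow GPU struggles to keep all of its CUs fed with useful work in geometry-dense scenes.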
Sony's pitch is essentially this: a smaller GPU can be a nimbler, more agile GPU, the implication being that PS5's graphics core should deliver higher performance than you might expect from its teraflop figure alone, since that number doesn't capture the capabilities of every part of the GPU. Developers work to the power limits of the SoC, their workloads adjusting frequencies on the fly - but it's workload that determines clock speeds, not ambient temperature.