Okay, I was just confused by the "at the same time as PS5 can calculate 1". So what you meant was "can cast 52 rays in the same time PS5 can cast 36." But I'm not sure that's correct. Since the RT hardware runs at the same clock as the rest of the GPU, I believe PS5's higher clockspeed means it can cast more than this ratio suggests. A naive calculation would say 44 rays versus 52; I doubt the math is so simple, though. (And see the next segment for evidence of obvious gaps in my learning.)
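A quick sketch of that naive arithmetic, assuming peak ray rate simply scales with CU count times clock speed (the clocks are the public figures: PS5 at up to 2.23 GHz, XSX at 1.825 GHz):

```python
# Naive peak ray-rate comparison, assuming rays cast per unit time
# scale linearly with CU count * clock speed (a big simplification).
PS5_CUS, PS5_CLOCK_GHZ = 36, 2.23    # public PS5 figures
XSX_CUS, XSX_CLOCK_GHZ = 52, 1.825   # public Xbox Series X figures

ps5_rate = PS5_CUS * PS5_CLOCK_GHZ   # 80.28 "CU-GHz"
xsx_rate = XSX_CUS * XSX_CLOCK_GHZ   # 94.90 "CU-GHz"

# In the time XSX casts 52 rays, how many does PS5 cast?
ps5_rays = 52 * ps5_rate / xsx_rate
print(round(ps5_rays))  # -> 44, the "44 versus 52" figure above
```

This is of course only the same CU-count-times-clock scaling applied to rays; real RT throughput depends on BVH depth, divergence, and memory behavior.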
Er, RT cost scales with resolution; if the perf delta is 20%, you get either ~20% more resolution or ~20% more rays per pixel, not both. It's either/or.
Okay, it seems I fundamentally misunderstand RT cores then. What I meant was that as resolution goes up with CU count, rays per pixel doesn't go up even faster; the two values move in lockstep. But you're saying it's a literal tradeoff: if your resolution is higher, the number of rays per pixel has to go down. I understand that BVH intersection tests aren't the only step in raytracing, but I'd have thought that with dedicated hardware for that part, RT would use little of the normal compute hardware. Is this incorrect?
Do we know if the CUs on both PS5 and XSX are the same size? Just something I thought of, since Cerny said the CUs on PS5 are 60% bigger than they were on PS4.
We don't know. There's no image or definitively known size for the PS5 APU. The only block-diagram-style render of the XSX chip is unhelpfully tilted and 3D, which introduces some imprecision. But according to that, it does seem that RDNA2 CUs are smaller overall than RDNA1's, so I wouldn't expect anything much smaller than that.
As for the Mark Cerny statement you referenced about 62% more transistors per CU: if we scale the PS4 CU up by that factor and then shrink it based on the estimated transistor-density change for TSMC's 7nm process, we get a PS5 CU size of about 1.7mm^2. But this is not a very accurate way to estimate, and that seems too small (15% below RDNA1, even though extra hardware has been added to each CU?).
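That estimate can be reproduced roughly like this. Only the 1.62x transistor figure comes from Cerny; the PS4 CU area and the chip-level density figures are my own assumptions (die-shot estimates and quoted transistor counts vary), so treat the result as illustrative:

```python
# Rough CU-area estimate for PS5: scale a PS4 (GCN, 28nm) CU up by
# Cerny's 62% transistor figure, then shrink by the chip-level density
# improvement from 28nm to TSMC 7nm. All inputs except the 1.62x
# factor are assumptions for illustration.
ps4_cu_area_mm2 = 4.7        # assumed PS4 CU area (die-shot estimates vary)
transistor_factor = 1.62     # Cerny: 62% more transistors per CU

# Approximate chip-level densities (transistors / mm^2), not pure
# logic density; both inputs are rough public estimates:
ps4_density = 3.2e9 / 348      # PS4 APU: ~3.2B transistors, ~348 mm^2
navi10_density = 10.3e9 / 251  # Navi 10 (RDNA1, N7): ~10.3B, ~251 mm^2
density_scale = navi10_density / ps4_density  # ~4.5x

ps5_cu_area_mm2 = ps4_cu_area_mm2 * transistor_factor / density_scale
print(f"{ps5_cu_area_mm2:.1f} mm^2")  # ~1.7, the figure in the post
```

As noted, this is a crude method: chip-level density mixes logic, SRAM, and I/O, which all scale differently, so the real number could easily land elsewhere.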