This is still confusing me. It still seems like nothing we've learned suggests that, overall, the X wouldn't hit higher targets for multiplatform games. It may not be a 1:1 lead over the PS5 all the time, but given everything we now know about this hardware, and how devs have handled spec gaps in the past, it's not clear why the X's power advantage should be so doubted.
think of it like... a mustang vs. a civic. does one have more power than the other? yes.
if you're on a road that's full of potholes, however, do you ever see that increase in power? probably not. you can only go as fast as the road lets you go safely, unless you're a really good driver.
a big part of sony's argument is that their custom hardware is going to take a lot of those potholes out - this was one of the big motivators behind their very long (and very interesting!) explanation of how their SSD setup works. they've spent a lot of time optimizing data movement speeds, managing decompression, improving data eviction policies, and so on. the theory is that, while you have a lower theoretical top end, the smooth operation of the hardware means you'll be able to run at a consistently fast speed. by comparison, if the series x is missing some of these features, there might be some games that truly are just faster (the aforementioned "really good drivers"), but it's possible that some games might even run worse. we don't actually know.
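to make the "eviction policies" part concrete, here's a toy sketch of the idea - a minimal LRU (least-recently-used) cache for streamed assets. everything in it (the names, the megabyte budget) is invented for illustration; it's not based on either console's actual software:

```python
from collections import OrderedDict

class AssetCache:
    """Toy LRU cache for streamed assets: when the memory budget is
    exceeded, evict whatever was used least recently."""

    def __init__(self, budget_mb):
        self.budget_mb = budget_mb
        self.used_mb = 0
        self.assets = OrderedDict()  # name -> size in MB, oldest first

    def request(self, name, size_mb):
        if name in self.assets:
            self.assets.move_to_end(name)  # mark as recently used
            return "hit"
        # evict least-recently-used assets until the new one fits
        while self.used_mb + size_mb > self.budget_mb and self.assets:
            _, evicted_size = self.assets.popitem(last=False)
            self.used_mb -= evicted_size
        self.assets[name] = size_mb
        self.used_mb += size_mb
        return "miss"  # would trigger a drive read + decompression
```

on a "miss" the asset has to come off the drive and through decompression, which is exactly where read speed and decompression hardware would show up - a smarter eviction policy just means fewer of those misses in the first place.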
there are also some other interesting things of note. for one thing, graphics aren't just a matter of flops. one of the big drivers of pop-in, for example, is memory/drive read speed - if you can't swap in higher-quality models and textures at an adequate rate, the player gets too close to the low-quality assets (or the not-yet-loaded objects, in extreme cases) and the game looks bad. how much does the significantly faster SSD lead to better graphics by improving texture load times? we have no idea. similarly, the ps5 has an entire audio processing unit that the xbox doesn't have. how much workload does this take off the cpu/gpu? does it lead to better performance? we have no idea - that would largely depend on how much time/effort developers put into audio for modern games. (personally, I'm incredibly intrigued by the audio fidelity sony was talking about, and genuinely think it could be the true "leap" of this generation if it pans out)
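the pop-in point really comes down to arithmetic: can the drive deliver the next detail tier before the player closes the distance to it? a rough back-of-envelope sketch, with every number invented for illustration (they're not real specs for either console):

```python
def loads_in_time(asset_mb, read_mb_s, distance_m, player_speed_m_s):
    """Toy pop-in check: does a higher-detail asset finish streaming in
    before the player covers the remaining distance to it?"""
    load_time_s = asset_mb / read_mb_s              # time to read the asset
    travel_time_s = distance_m / player_speed_m_s   # time until the player arrives
    return load_time_s <= travel_time_s

# made-up scenario: a 600 MB chunk of scenery, player 50 m away moving at 10 m/s
loads_in_time(600, 100, 50, 10)    # HDD-ish 100 MB/s: 6 s load vs 5 s travel -> pop-in
loads_in_time(600, 5000, 50, 10)   # fast-SSD-ish 5000 MB/s: 0.12 s load -> no pop-in
```

the same asset that pops in on a slow drive streams in comfortably on a fast one - which is why read speed shows up in image quality, not just load screens.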
the point is that, until we get actual games running cross-platform that we can compare directly, we don't know how the hardware truly compares. developers have been saying for a long time that numbers on a box with frequencies and teraflops aren't all that matters for hardware performance. solely comparing cpu/gpu theoretical top ends is like, the tim allen version of thinking about hardware - it's mostly just grunting and shouting about "more power". we'll see soon enough what things look like for sure.