Okay, thank you.
I'm not trying to argue against this, just to expand my knowledge, but my understanding was that there's a fairly clear link between resolution and required GPU power: to push twice the pixels, you need roughly twice the power.
Like you, I'm just asking questions, and the same question could be asked of other platforms. I have no doubt Lockhart will run Xbox Series-based games flawlessly; that's not the problem for me. It's more, to exaggerate, like having a 2080 Ti but only RetroArch to run on it, compared to a 2070 capable of launching any PC game.
When you say "twice the power needed", that's not contradictory with what I said about resolution. To put it another way: when Mark Cerny said that roughly 8 TF would be needed to push PS4 games to native 4K, but the One X achieved that with 6 TF, was he lying or was he wrong? Neither: the One X bumps games built for a 1.3 TF baseline (the Xbox One), while the PS4 baseline is 1.8 TF.
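The arithmetic behind that can be sketched quickly. This is a back-of-the-envelope model only: it assumes required compute scales linearly with pixel count (which real renderers only approximate), and uses the commonly cited 1.84 TF (PS4) and 1.31 TF (Xbox One) figures.

```python
# Naive assumption: GPU compute needed scales linearly with pixel count.
PIXELS_1080P = 1920 * 1080
PIXELS_4K = 3840 * 2160
pixel_ratio = PIXELS_4K / PIXELS_1080P  # exactly 4.0

ps4_tf = 1.84       # PS4 baseline (commonly cited figure)
xbox_one_tf = 1.31  # Xbox One baseline (commonly cited figure)

# Pushing a PS4-baseline game to native 4K:
ps4_4k_estimate = ps4_tf * pixel_ratio       # ~7.4 TF, close to Cerny's ~8 TF
# Pushing an Xbox One-baseline game to native 4K:
one_x_4k_estimate = xbox_one_tf * pixel_ratio  # ~5.2 TF, under the One X's 6 TF

print(ps4_4k_estimate, one_x_4k_estimate)
```

So both claims can be true at once: the One X clears 4K with 6 TF because its baseline is lower, not because Cerny's math was off.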
So my question is simply: what does it imply for devs to have a 4 TF baseline rather than 9.2 or 12? And I'm still convinced it means restrictions that aren't solved by linear 2D scaling (resolution) or post-process effects (shadows, etc.) but could affect the core design of a game (height of terrain relief, animation complexity, etc.).
For me, Lockhart is Microsoft's answer to three problems:
1) as a brand, not leaving the Series X alone against the worldwide PS5 marketing bulldozer
2) as a publisher, not burning bridges with their 1060/SATA3/Windows and One/One X/Game Pass customers
3) as a manufacturer, not offering only a premium product to current Xbox customers
You can see how point 2 is also interesting for publishers like EA or Ubisoft, and for anyone who has to reach the largest possible range of customers with the same game port. And I'm not raising all this against Lockhart, but against the argument that there's "nothing that scaling couldn't solve". I can't speak with authority as if I worked in gamedev, because I don't; I'm just arguing from the little I do know.