The difference between having separate hardware (NVIDIA's tensor cores) and having the support built into the shader cores (the XSX approach) is that in the former case you don't have to give up GPU compute power for native rendering in order to run a DLSS-like feature, whereas in the XSX's case the native rendering and the AI upscaling algorithm have to share the same resources. For example: the RTX 2060 has 57 TOPS of tensor core throughput. If we assume upscaling from 1080p to 4K needs 50 TOPS, the 2060 can run that entirely on its tensor cores and still use its full 6 TFLOPS of ALU power to render the native 1080p image. The XSX, on the other hand, would have to spend 50 of its 97 total TOPS of shader compute, roughly half the GPU, on the DLSS-like algorithm, leaving only about half the GPU for native 1080p rendering. If the upscaling pass eats half the GPU, you lose quite a bit of the potential benefit.
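To make the arithmetic above concrete, here's a quick back-of-the-envelope sketch. All the figures (57 TOPS, 50 TOPS, 97 TOPS) are the assumptions from the paragraph above, not measured values, and it deliberately ignores shared bandwidth and power:

```python
def shader_budget_left(total_tops: float, upscale_tops: float) -> float:
    """Fraction of shader throughput left for native rendering when the
    upscaling pass runs on the same shader ALUs."""
    if upscale_tops > total_tops:
        raise ValueError("upscaling cost exceeds total GPU throughput")
    return 1.0 - upscale_tops / total_tops

# RTX 2060: the assumed 50 TOPS of upscaling fits on the dedicated
# 57-TOPS tensor cores, so the shader ALUs keep ~100% of their budget.
rtx_fraction = 1.0

# XSX: the same 50 TOPS comes out of the shared 97 TOPS of shader
# compute, leaving roughly half of the GPU for native rendering.
xsx_fraction = shader_budget_left(97.0, 50.0)
print(f"XSX shader budget left for rendering: {xsx_fraction:.0%}")  # ~48%
```

The point isn't the exact percentage, just that on shared hardware the upscaling cost comes straight out of the rendering budget.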
As Elfotografoalocado mentioned, we don't know whether DLSS actually needs anywhere near 50 TOPS, and we don't know whether the tensor cores can run at full rate while the ALUs and RT cores are also running at full rate. If they can, then NVIDIA's DLSS support is inevitably stronger than what the XSX (and PS5) have. If not, dedicated hardware is probably still the better strategy, but the XSX would fare better in relative terms.
Oh, very interesting! It looks like the RT and tensor cores combined take up only about 20% of the GPU die, so die area by itself shouldn't be the major obstacle to including that hardware, I think.