Game graphics have entered a new and very dynamic stage of computer science problem-solving. The first problem is increasing resolution on the cheap, computationally speaking, because native 4K isn't worth the cost: it just doesn't look enough better than 1440p, or perhaps even 1080p, to justify rendering it the traditional way. So techniques like temporal upscaling, checkerboard rendering (CBR), and DLSS are all about finding cleverer, quicker ways to reconstruct a larger image from the image you have already calculated at great computational expense. It's a great compsci challenge and the field seems to be evolving very quickly. Guerrilla woke people up with their temporal upscaling for the 60fps multiplayer in Killzone Shadow Fall when the PS4 launched.
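To make the temporal idea concrete, here's a minimal sketch in Python/NumPy, assuming a half-resolution new frame, a full-resolution history buffer, and per-pixel motion vectors (the names and shapes are my own illustration, not any engine's actual implementation): reproject last frame's result with the motion vectors, then blend in the newly rendered pixels so detail accumulates over several frames instead of being paid for all at once.

```python
import numpy as np

def temporal_upscale(low_res_frame, history, motion_vectors, blend=0.1):
    """Blend a reprojected high-res history with the newest low-res frame.

    history:         (H, W, 3) accumulated high-res output from last frame
    low_res_frame:   (H/2, W/2, 3) newly rendered frame (assumed half res per axis)
    motion_vectors:  (H, W, 2) per-pixel screen-space motion in output pixels
    """
    h, w = history.shape[:2]
    # Reproject: for each output pixel, look up where it was last frame
    ys, xs = np.mgrid[0:h, 0:w]
    prev_x = np.clip((xs - motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    prev_y = np.clip((ys - motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    reprojected = history[prev_y, prev_x]
    # Upsample the new low-res frame to output size (nearest-neighbour for brevity)
    upsampled = low_res_frame.repeat(2, axis=0).repeat(2, axis=1)
    # Exponential accumulation: mostly history, a little new information each frame,
    # so the full-resolution image converges over time for a fraction of the cost
    return (1.0 - blend) * reprojected + blend * upsampled
```

A real implementation would also reject stale history (disocclusions, fast camera cuts), which is where most of the ghosting artifacts in these techniques come from.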
DLSS is interesting in that it essentially takes advantage of offline computation. In the same way that pre-baked lighting gives you far better lighting than you could do in real time, DLSS leans on a neural network that Nvidia trains offline against the game running at very high resolution on powerful hardware; at runtime the tensor cores just execute that trained network. The network can reconstruct a high-res image because the training has taught it what to expect.
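As a toy illustration of that offline-training / cheap-runtime split (not Nvidia's actual pipeline, and a simple linear filter rather than a real neural network), you could fit a small upscaling kernel against high-res reference renders once, then apply it per pixel every frame:

```python
import numpy as np

# Offline "training": fit a tiny linear filter that maps a 3x3 low-res
# neighbourhood to the matching high-res pixel, using high-res reference
# renders as ground truth. (A DLSS-style network is vastly bigger, but the
# expensive-once / cheap-every-frame split is the same shape of idea.)
def fit_upscale_filter(low_res_patches, high_res_targets):
    # low_res_patches: (N, 9) flattened 3x3 neighbourhoods from low-res frames
    # high_res_targets: (N,) ground-truth pixel values from high-res renders
    weights, *_ = np.linalg.lstsq(low_res_patches, high_res_targets, rcond=None)
    return weights  # (9,) learned kernel, computed once, shipped with the game

# Runtime "inference": one dot product per output pixel, standing in for the
# per-frame tensor-core pass that applies the trained model.
def apply_upscale_filter(low_res_patches, weights):
    return low_res_patches @ weights

# Toy usage with random data, just to show the two phases
patches = np.random.rand(10000, 9)
targets = patches @ (np.arange(9) / 10.0)          # pretend ground truth
w = fit_upscale_filter(patches, targets)            # offline, expensive, done once
reconstructed = apply_upscale_filter(patches, w)    # runtime, cheap, every frame
```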
I wonder how much flexibility you lose with DLSS, as with pre-baked lighting. Pre-baked lighting can't light objects that weren't in the original bake (moving NPCs, for example), and I wonder if DLSS's visual flaws come from situations the network hasn't been trained on. Perhaps rare and inherently unpredictable visual situations (rain, dynamic weather, destructible environments) are what make DLSS hiccup. What are the limits on how much it can be trained?
Another big area of comp sci innovation is obviously ray-tracing, denoising, and so on.
Ultimately, the more dynamic and flexible the compsci solution, the better for devs, because it frees them and makes things cheaper. Pre-baked lighting is expensive and time-consuming in the production process, and I'd imagine per-game training of machine-learning models might be as well. Hardware-specific solutions, like Nvidia-only ones, are less likely to be embraced widely if they add to production time and cost.
But there's great progress on this being made every day, and the new consoles have a good amount of computational power to go around, so there is plenty of scope for smart programmers to come up with good solutions that don't rely on tensor cores specifically.