This norm doesn't mean that they would leave potential performance on the table. It depends on how they feel about RT in particular, for example.
You also can't have it both ways. Either consoles are setting a baseline and Turing won't struggle with that, or Turing will be outdated sooner rather than later.
Console-level settings aren't the norm for PC gaming, so while consoles will provide an RT baseline which even a 2060 will likely handle, PC versions of similar games will inevitably target top-end Ampere and RDNA2 with their RT implementations - meaning more and better RT usage at "high" and "ultra" settings.
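Just to make the idea concrete, here's a minimal sketch of how one RT effect could scale from a console baseline up to PC "ultra" purely through quality knobs. The preset names, fields and numbers are hypothetical, not taken from any shipping game:

```cpp
// Hypothetical illustration only: names and numbers are made up. The point is
// that a single RT feature can span console baseline to PC "ultra" just by
// turning quality knobs, without changing the underlying technique.
#include <cstdio>

struct RtQuality {
    const char* preset;
    float reflectionResolutionScale; // fraction of native res the RT pass runs at
    int   raysPerPixel;              // rays launched per pixel for the effect
    int   maxBounces;                // bounce depth for reflections / GI
    bool  rtShadows;                 // extra effects enabled only on higher tiers
};

int main() {
    const RtQuality presets[] = {
        {"console / medium", 0.50f, 1, 1, false},
        {"high",             0.75f, 1, 2, true },
        {"ultra",            1.00f, 2, 2, true },
    };
    for (const auto& p : presets)
        std::printf("%-16s res %.2f, %d rpp, %d bounces, RT shadows: %s\n",
                    p.preset, p.reflectionResolutionScale, p.raysPerPixel,
                    p.maxBounces, p.rtShadows ? "on" : "off");
    return 0;
}
```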
RT is a good tech, and if it being available on consoles and every GPU tier in the future doesn't convince you that it will have broad adoption ASAP, then I have no idea why DLSS being just a good tech without a very broad user base makes you think it will have an even faster adoption rate.
Cost of implementation is wildly different.
DLSS is kinda similar to adding SMAA T2x - you need motion vectors (which your game is likely generating anyway for TAA) and that's pretty much it. For such a small effort you're giving the majority of the PC GPU userbase (in the sense that NV controls 70-80% of the market and those percentages will eventually own something which supports DLSS) some insane performance benefits at small to zero IQ cost.
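Here's a rough sketch of that "drop-in TAA replacement" idea. The Upscaler interface and the TaaResolver/DlssUpscaler names are hypothetical stand-ins, not the real NGX API; the point is that both paths consume the same per-frame inputs a TAA-ready renderer already produces:

```cpp
// Hypothetical sketch: a renderer that already feeds TAA (color, depth, motion
// vectors, jitter) can swap in a DLSS-style upscaler behind the same interface.
#include <memory>

struct Texture {};              // stand-in for a real GPU resource
struct FrameInputs {
    Texture* color;             // jittered frame, rendered at internal resolution
    Texture* depth;
    Texture* motionVectors;     // already generated for TAA
    float    jitterX, jitterY;  // sub-pixel jitter used this frame
};

class Upscaler {
public:
    virtual ~Upscaler() = default;
    virtual void resolve(const FrameInputs& in, Texture* output) = 0;
};

class TaaResolver  : public Upscaler { public: void resolve(const FrameInputs&, Texture*) override {} };
class DlssUpscaler : public Upscaler { public: void resolve(const FrameInputs&, Texture*) override {} };

void renderFrame(Upscaler& upscaler, const FrameInputs& in, Texture* backbuffer) {
    // ... g-buffer, lighting, post, etc. happen before this point ...
    upscaler.resolve(in, backbuffer);   // the only thing that changes is which
                                        // Upscaler was constructed at startup
}

int main() {
    FrameInputs in{};
    Texture backbuffer;
    std::unique_ptr<Upscaler> upscaler = std::make_unique<DlssUpscaler>(); // or TaaResolver
    renderFrame(*upscaler, in, &backbuffer);
    return 0;
}
```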
Adding RT can affect everything, starting with basic game design (lighting has an effect on this), through asset creation (optimal BVH generation may require asset refactoring), and obviously the renderer. It's orders of magnitude higher in implementation cost than adding DLSS as a drop-in TAA replacement.
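A toy example of the asset side of that, assuming a made-up scene (this is not engine code): if artists merged a bunch of scattered props into one "mesh", its single bounding box covers a huge, mostly empty volume, which is exactly the kind of thing that hurts BVH quality and can force asset refactoring:

```cpp
// Toy illustration of why asset layout matters for RT acceleration structures.
// All geometry and numbers are invented for the example.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Aabb { float min[3], max[3]; };

float surfaceArea(const Aabb& b) {
    float dx = b.max[0] - b.min[0], dy = b.max[1] - b.min[1], dz = b.max[2] - b.min[2];
    return 2.0f * (dx * dy + dy * dz + dz * dx);
}

Aabb merge(const Aabb& a, const Aabb& b) {
    Aabb r;
    for (int i = 0; i < 3; ++i) {
        r.min[i] = std::min(a.min[i], b.min[i]);
        r.max[i] = std::max(a.max[i], b.max[i]);
    }
    return r;
}

int main() {
    // Four small props scattered across a ~100-unit-wide level.
    std::vector<Aabb> props = {
        {{  0, 0,  0}, {  1, 1,  1}},
        {{ 40, 0,  0}, { 41, 1,  1}},
        {{  0, 0, 60}, {  1, 1, 61}},
        {{ 90, 0, 90}, { 91, 1, 91}},
    };

    // "Merged asset": one geometry, one box around everything.
    Aabb mergedBox = props[0];
    float separateArea = 0.0f;
    for (const Aabb& p : props) {
        mergedBox = merge(mergedBox, p);
        separateArea += surfaceArea(p);   // what the builder sees if props stay separate
    }

    std::printf("one merged box, surface area: %.1f\n", surfaceArea(mergedBox));
    std::printf("four separate boxes, total:   %.1f\n", separateArea);
    // Rays get tested against these boxes, so the huge mostly-empty merged box
    // means far more wasted traversal and intersection work per ray.
    return 0;
}
```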
So while consoles supporting RT will obviously mean that it will be comparatively widespread this next gen, the fact that they don't support DLSS doesn't mean much for its adoption rate in PC releases.