I honestly can't even recognize ray tracing in games yet. I don't know what it's supposed to look like and I don't know what benefit it's supposed to provide me. It's probably because 99% of the games I'm playing on my PS5 are PS4 up-ports, but I just don't really know what ray tracing is supposed to actually do.
Imagine you have a light in an environment.
99.9% of the time, all the light it "casts" on nearby objects is drawn in ahead of time, baked into the textures by artists. That's why 99.9% of lights are static and non-interactive, and cast fake shadows or none at all.
Ray tracing makes that light behave like a real light: it dynamically illuminates the scene and casts realistic shadows and reflections.
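To make that concrete, here's a toy Python sketch (not shader or engine code, every name invented for illustration) of the difference: a baked shadow is a lookup into data someone prepared earlier, while a traced shadow is an actual visibility question answered fresh every frame.

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def shadow_ray_blocked(point, light_pos, sphere_center, sphere_radius):
    """Ray-traced shadow: cast a ray from the shaded point toward the
    light and ask whether the occluder sphere blocks it. Works for any
    occluder position, every frame, with no precomputation."""
    to_light = sub(light_pos, point)
    dist = math.sqrt(dot(to_light, to_light))
    d = [c / dist for c in to_light]        # unit direction toward the light
    oc = sub(point, sphere_center)
    b = 2.0 * dot(d, oc)
    c = dot(oc, oc) - sphere_radius ** 2
    disc = b * b - 4.0 * c                  # standard ray/sphere quadratic
    if disc < 0.0:
        return False                        # ray misses the occluder entirely
    t = (-b - math.sqrt(disc)) / 2.0        # distance to nearest intersection
    return 0.0 < t < dist                   # shadowed only if the hit sits between point and light

def baked_shadow(point, lightmap):
    """The traditional approach: look up a value that was painted or baked
    into a texture ahead of time. Cheap, but frozen -- move the light or
    the occluder and this answer is simply wrong."""
    return lightmap.get(tuple(point), 1.0)

# The occluder sits halfway between the shaded point and the light,
# so the traced test reports shadow without anyone baking anything:
print(shadow_ray_blocked([0, 0, 0], [0, 10, 0], [0, 5, 0], 1.0))  # True
```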
If you want to see the difference in reflections, look at "Control": you get reflections in transparent objects, and dynamic objects are reflected even when they're off-screen.
Our two main solutions for reflections right now are screen-space reflections, where objects disappear from reflections when they aren't rendered on-screen, causing lots of artifacts and obvious issues like halos around characters (since they occlude whatever is behind them), and cubemaps, which vary wildly in quality and are static, since they're captured ahead of time (if I remember correctly). So engines sometimes load a cubemap in over top of the screen-space result to "cover" for its blindness to off-screen objects, and you can occasionally see exactly where the cubemap takes over from the screen-space reflections.
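Here's a minimal sketch of that fallback logic, under made-up assumptions (flat lists as buffers, a single value standing in for the cubemap). A real engine marches in clip space against a hierarchical depth buffer, but the decision structure is the point: reuse an on-screen pixel if the ray finds one, otherwise punt to the static capture.

```python
def sample_cubemap(cubemap, direction):
    """Static fallback: a pre-baked environment capture. It never changes,
    so dynamic objects (characters, vehicles) are simply absent from it."""
    return cubemap  # toy: one flat value stands in for the whole capture

def ssr_with_fallback(x, y, step, ray_depth, depth_step,
                      depth_buffer, color_buffer, cubemap, max_steps=64):
    """March a reflection ray across the screen. If it crosses behind the
    depth buffer, reuse that pixel's already-shaded color (a 'hit'). If it
    walks off-screen first, the reflected object was never rendered this
    frame, so the only option left is the static cubemap."""
    depth = ray_depth
    for _ in range(max_steps):
        x += step[0]; y += step[1]; depth += depth_step
        if not (0 <= int(x) < len(depth_buffer[0]) and 0 <= int(y) < len(depth_buffer)):
            return sample_cubemap(cubemap, step)   # off-screen: SSR has no data here
        if depth >= depth_buffer[int(y)][int(x)]:
            return color_buffer[int(y)][int(x)]    # hit: reuse on-screen shading
    return sample_cubemap(cubemap, step)

# Toy 4x4 buffers: a 'wall' at depth 5 fills the right half of the screen.
depths = [[9, 9, 5, 5] for _ in range(4)]
colors = [[0, 0, 1, 1] for _ in range(4)]
print(ssr_with_fallback(0, 0, (1, 0), 0.0, 2.0, depths, colors, cubemap=0.5))   # 1 (hit the wall)
print(ssr_with_fallback(0, 0, (-1, 0), 0.0, 2.0, depths, colors, cubemap=0.5))  # 0.5 (fell back)
```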
Here is a video showing this kind of occlusion artifacting in screen-space reflections at its most extreme, from Battlefield 5. Notice how everyone occluded by the gun or by other objects just disappears from the reflections, because there isn't a good static fallback (like a cubemap) to take over or soften the visuals. Once something isn't on-screen, the effect falls apart. Dynamic objects usually only show up in reflections while they're being rendered on-screen, and we've layered tacked-on solutions over that problem, whereas RTX just handles it, because it's a true-to-life dynamic system.
View: https://youtu.be/cld6c1ALw80?t=35
If you want an example of where ray tracing can take us that current tech cannot: imagine playing a scene in a subway tunnel with one of those corner mirrors that shows you who or what is coming around the corner. In a ray-traced world, we could accurately see in that mirror who and what is coming down the hall. Currently, there is no way to do that... MAYBE with the fake-room rendering technique from generations past, but that's a lot of extra overhead on its own and isn't a true 1:1 reflection. With ray tracing, we can trust lights to do what they're supposed to do, and artists don't have to do another pass to re-fake all the lighting every time the assets or the design of something changes. They can just change whatever they want, and the lighting physics works as it's supposed to. No more artifacts from the many different technologies we currently stack together to create a fully lit scene.
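A back-of-the-napkin version of that mirror in plain Python (the tunnel layout and positions are invented, but the reflection formula is the real one a tracer uses): the camera ray bounces off the mirror and the second ray is free to hit anything in the scene, including things no screen-space method could ever see.

```python
import math

def reflect(d, n):
    """Mirror a direction about a surface normal: r = d - 2(d.n)n."""
    k = 2.0 * sum(a * b for a, b in zip(d, n))
    return [a - k * b for a, b in zip(d, n)]

def hits_sphere(origin, d, center, radius):
    """Does a ray from origin with unit direction d hit the sphere?"""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(x * y for x, y in zip(d, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    return disc >= 0.0 and (-b - math.sqrt(disc)) / 2.0 > 0.0

# The camera ray travels straight down the tunnel (+z) and strikes a
# 45-degree corner mirror whose normal points back and to the side.
s = 1.0 / math.sqrt(2.0)
mirror_point = [0.0, 0.0, 10.0]
bounced = reflect([0.0, 0.0, 1.0], [-s, 0.0, -s])

# The 'person' around the corner is fully off-screen, but the bounced
# ray finds them anyway -- the entire point of traced reflections.
person_pos = [-5.0, 0.0, 10.0]
print([round(v, 6) for v in bounced])                       # [-1.0, 0.0, -0.0]: down the side hall
print(hits_sphere(mirror_point, bounced, person_pos, 0.5))  # True
```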
Every effect of light and shadow, from ambient occlusion to shadows, bounce/global illumination, and reflections, can be replaced by RTX, and lighting artists will go from "faking it" to working more like a film lighting crew: place the lights you want and trust the physics, rather than painting lighting in and finding clever ways to fake stuff.
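To show why that unification holds, here's a hedged toy (grayscale, one sphere, one light, every constant invented): a single trace() primitive, called three different ways, yields direct light with shadows, a GI/AO-style bounce, and a reflection. Real renderers add heavy sampling and denoising on top, but structurally it's this one loop.

```python
import math, random

SPHERE_C, SPHERE_R = [0.0, -0.2, 3.0], 1.0   # the only object
LIGHT = [2.0, 3.0, 1.0]                      # one point light

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def sub(a, b): return [x - y for x, y in zip(a, b)]

def hit_sphere(o, d):
    """Distance along unit ray d to the sphere, or None on a miss."""
    oc = sub(o, SPHERE_C)
    b = 2.0 * dot(d, oc)
    c = dot(oc, oc) - SPHERE_R * SPHERE_R
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None           # epsilon avoids self-hits

def hemisphere_dir(n):
    """Random bounce direction above the surface (for the GI/AO term)."""
    while True:
        v = [random.uniform(-1.0, 1.0) for _ in range(3)]
        len2 = dot(v, v)
        if 1e-6 < len2 <= 1.0:
            v = [c / math.sqrt(len2) for c in v]
            return v if dot(v, n) > 0.0 else [-c for c in v]

def trace(o, d, depth=0):
    t = hit_sphere(o, d)
    if t is None:
        return 0.2                            # environment light
    p = [o[i] + t * d[i] for i in range(3)]
    n = [(p[i] - SPHERE_C[i]) / SPHERE_R for i in range(3)]
    # Shadows: one more trace, aimed at the light.
    to_l = sub(LIGHT, p)
    dist = math.sqrt(dot(to_l, to_l))
    ld = [c / dist for c in to_l]
    direct = max(0.0, dot(n, ld)) if hit_sphere(p, ld) is None else 0.0
    if depth >= 2:
        return 0.6 * direct
    # GI / AO: the same trace, aimed in a random bounce direction.
    indirect = trace(p, hemisphere_dir(n), depth + 1)
    # Reflections: the same trace again, mirrored about the normal.
    k = 2.0 * dot(d, n)
    reflected = trace(p, [d[i] - k * n[i] for i in range(3)], depth + 1)
    return 0.6 * direct + 0.3 * indirect + 0.1 * reflected

print(trace([0.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # radiance for one camera ray
```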
I understand people's complaints about "sacrificing performance", but throwing the baby out with the bathwater is stupid. The performance will catch up as the technique and our hardware both get more efficient, and in short order it will become the de facto way EVERYTHING to do with light and shadow is handled in games. The argument is especially annoying right now because spending a bit of time with a comparison video, or just understanding what RTX actually does, demonstrates its value on its own. You get true, dynamic lighting, shadows, and reflections, even for occluded objects, as long as they're rendered in the scene, without painting in fake lighting or trying to approximate things like the darkening in object creases. It even scales with resolution and object count for performance (as seen in a lot of console RT games).
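On that scalability point, a trivial sketch of the idea (trace_reflection here is a made-up stand-in for whatever per-pixel trace an engine actually runs):

```python
def trace_half_res(width, height, trace_reflection):
    """Trace reflections on a half-resolution grid (a quarter of the rays),
    then nearest-neighbor upsample back to full resolution -- the same kind
    of knob console RT modes turn to hit their frame budget."""
    small = [[trace_reflection(x * 2, y * 2) for x in range(width // 2)]
             for y in range(height // 2)]
    return [[small[y // 2][x // 2] for x in range(width)] for y in range(height)]

# Stand-in 'trace': any function of pixel coordinates works here.
full = trace_half_res(4, 4, lambda x, y: x + y)
print(full)   # a 4x4 image built from only 4 traced rays
```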