Warning, huge RT info dump.
Thank you for the explanation.
So if I am understanding this right:
- Rasterized lights, be they the sun or other sources, and whether part of forward or deferred rendering, do not produce shadows. So is it essentially a (baked) light map? Shadows are then superimposed as part of the raster pipeline by sampling from an object's corresponding shadow map, with its length, LoD, collision detection and direction relative to the light source precalculated in the rasterization pipeline before being sent out for rendering.
It's not baked. I mean, it could be baked, but specifically in Metro it's not. Sunlight is calculated in real-time in Metro (it has to be, the game has a day/night cycle), but it is just a simple rasterized light. Rasterized light doesn't follow physics; it illuminates whatever a function says it should illuminate, without knowing if something is in the way. Think of it as a stupid painter: it paints light on every object in range without checking whether something is blocking the light. If there is a sphere with a light source on its left, its left side will be illuminated and the right side will stay dark. But if there is a box between the sphere and the light? Rasterized lighting doesn't care, the sphere's left side will still be in full light. So everything in range of the light gets illuminated and no shadows exist. That's where rasterized shadows come in: they are a completely different effect, detached entirely from the lighting model, a whole other rasterization pass.
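To make the "stupid painter" point concrete, here is a minimal sketch (names and values are illustrative, not from any real engine) of a Lambert-style shading function. Notice that it only compares the surface normal against the light direction; there is no visibility test anywhere, so a box between the light and the sphere changes nothing:

```python
def normalize(v):
    """Scale a vector to unit length."""
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_shade(normal, light_dir, light_color, albedo):
    """Shade a surface point from a rasterized light.
    Note: no occlusion test at all -- anything between the light
    and this point is simply ignored."""
    n_dot_l = max(0.0, dot(normalize(normal), normalize(light_dir)))
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

# Left side of a sphere, light coming from the left: fully lit,
# regardless of whatever geometry sits in between.
lit = lambert_shade(normal=(-1, 0, 0), light_dir=(-1, 0, 0),
                    light_color=(1.0, 1.0, 1.0), albedo=(0.8, 0.2, 0.2))
# Right side faces away: dark, but only because of the surface angle.
dark = lambert_shade(normal=(1, 0, 0), light_dir=(-1, 0, 0),
                     light_color=(1.0, 1.0, 1.0), albedo=(0.8, 0.2, 0.2))
```

Shadow maps are then a second, separate pass that darkens pixels after the fact, which is exactly why they are "detached from the lighting model."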
- Metro uses the typical rasterization process for direct lights and their corresponding shadows, then utilizes another layer of processing (akin to a ReShade mod) that shoots a few million RT rays (4A Games said 3 rays per pixel per frame at 1080p) to calculate bounce light and its corresponding shadows. Upon completion, the result is combined with the previous data in the pipeline to generate light and shadow maps and rendered on screen. The resulting RT GI products are the interstitial light and shadow maps that cover the imperfections of the existing rasterization solution. Presumably, this is the "Hybrid RT solution" folks have been talking about.
Yes, you can call the RT GI a totally separate process. Rays are shot through every pixel on-screen (I think 4A went with 1 ray per pixel in the end) and bounce around; at the end they look for a direct line to the sun. If a ray finds the sun, the pixel gets bounce light; if it doesn't, the pixel doesn't get bounce light and remains dark. So shadows are a byproduct of the light, just like real shadows. What that means is that every light that directly illuminates something is being faked using rasterization, and every shadow resulting from that light is a fake rasterized shadow. But light that bounces off objects, which is based only on sunlight, creates additional shadows that don't exist in the RT-less version of the game. After the RT process is done, its lighting data is added to the original image and helps create the final image.
It's very easy to see in these screenshots (I've taken them from Dictator's wonderful DF videos):
No RT GI: the only light source is the sunlight outside. There is no light source inside, so the insides and the body get some general unified light so they won't be completely dark. It's flat, it's ugly, it has no "depth", and there are no shadows, because with no direct light source there is nothing to cast shadows from.
Exactly the same scene, now with RT GI. The outside remains the same (it might look a bit brighter because it's darker inside) but the insides look different. Before the RT GI is calculated, the insides are pitch black, because the flat global light that was there before is no longer needed. Then rays from the sun hit the ground outside, bounce, and light the insides. Under the body, you can see that all the rays it blocked form a natural shadow that was never there in the first place, because there was no rasterized light source to cast it. You can also see that corners have a bit of AO to them; that's because tight areas are less likely to receive bounce lighting, so they get darker. Again, natural AO forms, and it will only get better with more bounces.
Another good example is this one: a scene with sunlight outside and uniform global light inside that looks very flat.
Now, let's throw in some RT GI. The lighting isn't flat anymore: light bounces in from the outside and creates nice, deep, rich lighting inside with natural AO and shadows. Just look at those red bolts and the shadow they cast. Beautiful.
- In your final paragraph, I assume you're talking about path tracing, the holy grail and end point for lighting technology.
Yes, it could be done using path tracing, but this specific thing I was talking about could also be done using photon mapping, which is unachievable right now because photon mapping and the BVH structure don't play nice. Photon mapping needs a kd-tree, and kd-trees are very hard to build, so as long as the BVH is the only traversal structure we have, path tracing it is.
Now, if that is correct then:
- In the video link I posted in parentheses above, if you go to around 2:33, you'll hear the narrator mention that the current GI process is a culmination of a multitude of subsystems, which you've covered. What subsystems in that pipeline can be realistically discarded in favour of better performance while using RT GI? (SSAO would be one for sure)
RT GI gives bounce light, bounce-light shadows, and AO. Bounce-light shadows aren't really a thing right now, so they are basically new shadows created as a result of the RT GI; they don't replace anything, they are a pure upgrade. AO will be created, but its quality will change based on how many bounces the RT GI does. IMO what Metro does already looks better than any SSAO out there, and when hardware gets more powerful we will get more bounces, which will make the GI, the AO and the bounce-light shadows all look better.
- Is RT GI calculated based upon pre-existing light sources in the world (because that is what it looked like when DF did their coverage of RT for Metro Exodus on the 2080 Ti)? And would the performance-saving function then come in the form of "how many times" the light is bounced before generating decent light and shadow maps (and then de-noising), given you mentioned "you can have every light in the game calculated as an emissive RT light and that light could bounce 4-5 times"?
In a perfect world, every light is emissive and every light bounces multiple times (and on top of that, light does a lot more: it reflects, refracts, interacts with atmospheric elements and so on). Minecraft RTX does that; that's the reason it's one of the most intensive RT demos around, even though originally Minecraft isn't exactly the highest graphical benchmark. This generation won't be able to handle that outside of games like Minecraft (which runs at 1080p sub-60fps on XSX), but PS6 might and PS7 will for sure.
Because the XSX or the 2080 Ti can't handle that in a game like Metro, all they do is calculate bounce lighting for sunlight and sunlight alone. So direct light from the sun isn't using RT, a flashlight isn't using RT, etc. But over time developers will get better at using RT, hardware will become more powerful, and wonderful things will happen. My assumption is that the PS5 Pro and an "XSX-X" will be RT beasts and will try to push more RT in current PS5 games over resolution. So instead of the resolution machines that the Pro and the X were this generation, next-gen the Pro consoles might be RT machines, considering going over 4K seems to have diminishing returns.
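A back-of-the-envelope ray budget shows why bounce count is the main performance knob. All numbers below are my own illustrative assumptions (a simple 1-extension-ray-plus-1-shadow-ray-per-bounce model), not measured figures from Metro or Minecraft RTX:

```python
def rays_per_frame(width, height, rays_per_pixel, bounces):
    """Rough ray count: each bounce costs one extension ray
    plus one shadow ray toward the light (assumed cost model)."""
    rays_per_path = 1 + bounces * 2
    return width * height * rays_per_pixel * rays_per_path

# Metro-style sun-only GI with a single bounce at 1080p:
metro_like = rays_per_frame(1920, 1080, 1, 1)   # ~6.2 million rays/frame
# The "perfect world" case with 5 bounces per path:
ideal = rays_per_frame(1920, 1080, 1, 5)        # ~22.8 million rays/frame
```

And that is before adding more rays per pixel, reflections, refraction or atmospheric interaction, which is why the full treatment is left to future hardware.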
- Isn't "rasterization" the end point for all 3D (vector-based) workloads, so that they can be rendered on a 2D plane (like a TV)?
Rasterization isn't needed for 3D, but it's efficient as hell. Thing is, it's ugly, so developers need to come up with more and more rasterization methods to cover up the ugliness or mimic optical phenomena that occur naturally with light/RT. Because developers know the real-world lighting targets, the hoops they jump through to hit those targets, and how they fail because rasterization just can't simulate light, they worship the idea of RT even if some gamers don't understand its importance.
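The cheap core of rasterization the question is getting at is just projecting 3D points onto a 2D image plane. A minimal sketch (camera at the origin looking down +z, focal length and resolution are assumed constants) shows how little work it is per vertex, and also why it carries no lighting physics at all:

```python
# Assumed pinhole-camera constants for this sketch.
FOCAL = 1.0
WIDTH, HEIGHT = 640, 480

def project(point):
    """Perspective-project a 3D camera-space point to pixel coordinates."""
    x, y, z = point
    # Perspective divide: farther points land closer to the image center.
    sx = (x * FOCAL / z) * (WIDTH / 2) + WIDTH / 2
    sy = (y * FOCAL / z) * (HEIGHT / 2) + HEIGHT / 2
    return (round(sx), round(sy))

center = project((0.0, 0.0, 5.0))   # a point straight ahead of the camera
```

Everything after this projection (shading, shadow maps, SSAO, and so on) is the pile of extra rasterization methods the answer above describes.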
- Are hybrid RT GI solutions (like Metro E's) confined to screen space / the viewing frustum?
No, they are not confined to screen space. But I wouldn't call Metro's solution hybrid; it's a real RT GI solution, just not a full one, because it only covers the sun and it could use more bounces. What I mean is that Metro uses zero rasterization for its RT GI, it's pure RT, but the solution doesn't go deep enough because of power limitations. A hybrid solution would be Remedy's Control: Remedy uses a rasterization-based GI but uses RT to fix light leaks.
- Would using hybrid RT GI necessitate lowering the resolution to 1080p?
Metro isn't hybrid and it works just fine at 1440p and 4K. Control's solution is hybrid and it's also fine at 1440p and 4K. The video you saw was very early work by 4A; they were still shooting 3 rays per pixel, which no one does today. That's why they were confined to 1080p in that video.