For me, when watching the demo I was like "ok, bounced light, I have seen that before without screen-space probes" - but I HAVE NEVER SEEN THIS GEO BEFORE. For me it is soooooooo much more impressive than the lighting presentation or anything else at all.
I am really curious about the exact details of how they break down meshes (perhaps like Ubisoft did for Assassin's Creed Unity? Or geometry-image thingies??), what meshes can be done this way, and how it can be extended. Also how it scales in performance in the end; if it is really so darn compute heavy, then we will see some unique scaling here between GPUs.
Afaik there are two approaches described in the literature that have never been used in production before.
One transforms and encodes the topology information into a texture (geometry images); the other breaks the mesh down into sort-of meshlets and pre-computes progressive LODs automatically (I think they're called progressive buffers or something like that).
My bet would be on the first approach, for a variety of reasons (also because it is the craziest one).
The encoding works so that lower mips of the texture correspond to a similar topology, just less detailed.
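To make that concrete, here is a toy sketch of the geometry-image idea (my own illustration, not anything confirmed about their implementation): vertex positions live in a 2D grid whose connectivity is implicit, so box-filtering the grid down one mip yields a coarser mesh with the same overall topology.

```python
# Toy geometry-image sketch (hypothetical, for illustration only):
# a square grid of (x, y, z) positions with implicit connectivity.

def mip(geo_img):
    """Average each 2x2 block of positions -> one mip level coarser."""
    n = len(geo_img)
    return [[tuple(sum(geo_img[2*i + di][2*j + dj][c]
                       for di in (0, 1) for dj in (0, 1)) / 4.0
                   for c in range(3))
             for j in range(n // 2)]
            for i in range(n // 2)]

def grid_to_triangles(geo_img):
    """Each grid cell becomes two triangles; connectivity is never stored."""
    tris = []
    n = len(geo_img)
    for i in range(n - 1):
        for j in range(n - 1):
            a, b = geo_img[i][j], geo_img[i][j + 1]
            c, d = geo_img[i + 1][j], geo_img[i + 1][j + 1]
            tris += [(a, b, c), (b, d, c)]
    return tris
```

A 4x4 grid decodes to 18 triangles; one mip down, the same surface decodes to just 2, with no separate index buffer at either level.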
This maps almost too perfectly onto the virtual texturing scheme and sampler feedback to not be the approach used.
If you're generating too much geometry in a certain area, drop a mip level in that section; if too little, stream in a higher mip.
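That feedback loop could be as simple as matching triangle density to pixel footprint. A minimal sketch, assuming a made-up patch budget of roughly one triangle per pixel (the function name and parameters are mine, not from any real engine):

```python
import math

def select_geo_mip(base_resolution, projected_pixels, max_mip):
    """Pick a geometry-image mip for a patch (hypothetical heuristic).

    base_resolution: triangles per side at mip 0.
    projected_pixels: the patch's on-screen footprint, in pixels per side.
    Each mip halves the triangle count per side, so we want the mip
    where base_resolution / 2^mip is close to projected_pixels.
    """
    if projected_pixels <= 0:
        return max_mip  # off-screen or degenerate: coarsest mip
    ideal = math.log2(max(base_resolution / projected_pixels, 1.0))
    return min(max_mip, max(0, round(ideal)))
```

So a 1024-triangle-wide patch covering 128 pixels would request mip 3, and a patch larger on screen than its base tessellation would stay at mip 0; in practice you'd drive this from GPU feedback rather than a per-patch CPU estimate.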
This implicit parametrisation should also be easier on disk size and highly compressible with standard texture techniques, which could be how they avoid shipping 10-terabyte games to the user lol.