They are using virtual texturing and apparently storing geometry data as texture data to reduce the footprint, applying the same virtualisation to the geometry as well.
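To make that idea concrete, here's a minimal sketch of riding geometry on a texture-style paging path. The page size, one-vertex-per-texel layout and all names are my own assumptions for illustration, not how Epic actually lays things out:

#include <cstddef>
#include <vector>

// Toy layout: a 128x128 RGBA32F "page" holds 16K vertex positions,
// one vertex per texel, so geometry pages can be requested, cached
// and evicted by the same machinery as virtual texture pages.
struct Float4 { float x, y, z, w; };

constexpr int kPageDim      = 128;
constexpr int kVertsPerPage = kPageDim * kPageDim;

std::vector<Float4> PackVerticesIntoPage(const std::vector<Float4>& verts,
                                         std::size_t firstVert) {
    std::vector<Float4> page(kVertsPerPage, Float4{0, 0, 0, 0});
    for (int i = 0; i < kVertsPerPage; ++i) {
        std::size_t v = firstVert + static_cast<std::size_t>(i);
        if (v >= verts.size()) break;  // zero-pad the tail of the last page
        page[i] = verts[v];            // lands at texel (i % 128, i / 128)
    }
    return page;
}

The payoff is that a mesh only costs RAM for the pages the camera actually needs, same as a sparse texture.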
If anything this reduces the amount of data you need to load and keep in memory compared to the traditional approach of loading the entire texture set and all of a model's different LODs.
Games have already been using virtual texturing, partially resident textures, and on-demand loading of baked LODs.
In any virtualised system you're effectively trading memory footprint against bandwidth. For a given set of data, the overall net data size doesn't get smaller just because you're virtualising it. The amount resident in RAM can be smaller, but how much smaller is a function of how quickly you can get data in and out of RAM. So there is a direct relationship with storage speed.
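As a toy formula (every number below is made up, purely to show the relationship): resident RAM is roughly the working set you're actually sampling, plus a buffer to cover request-to-resident latency at your consumption rate.

#include <cstdio>

int main() {
    double workingSetGB = 4.0;  // data actually sampled in the current view (assumed)
    double demandGBps   = 2.0;  // rate at which new data becomes visible (assumed)
    double latencyS     = 0.1;  // request-to-resident latency of the storage path (assumed)

    // Faster storage -> lower latency -> smaller buffer on top of the working set.
    std::printf("resident ~ %.2f GB\n", workingSetGB + demandGBps * latencyS);
    return 0;
}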
For this system vs 'before', i.e. the PS4/XB1 gen? The net data size will be vastly bigger, I think. Yes, you no longer need a normal map to compensate for missing mesh geometry. But the vertex data is much larger, they're using multiple high-resolution texture layers per model, and you can have many more assets per scene now that the renderer can handle it.
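Some napkin math on why 'vastly bigger' seems plausible. The traditional asset numbers below are generic; the only demo-specific figure is Epic's quoted 33 million triangles for one statue, and the bytes-per-vertex and vertex-sharing assumptions are mine:

#include <cstdio>

int main() {
    // Traditional asset: ~50k-tri LOD0 mesh + 4K BC5 normal map (generic numbers).
    double verts      = 50e3 / 2.0;  // closed meshes have roughly half as many verts as tris
    double tradMeshMB = (verts * 32 + 50e3 * 3 * 4) / 1e6;  // 32 B/vert + 32-bit indices
    double normalMB   = 4096.0 * 4096.0 * 1.0 / 1e6;        // BC5 = 1 byte/texel
    std::printf("traditional asset: ~%.0f MB\n", tradMeshMB + normalMB);

    // Demo-style asset: Epic quoted 33M triangles for a single statue.
    // 16 B/vertex is my assumption for heavily quantised, packed attributes.
    double denseMB = (33e6 / 2.0) * 16 / 1e6;
    std::printf("dense statue: ~%.0f MB, before indices/cluster data\n", denseMB);
    return 0;
}

So even with aggressive quantisation, one source asset can land an order of magnitude above a complete traditional asset, and that's before layering several 8K textures on top.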
How does that translate to necessary RAM? We know the amount of data needed across that demo exceeds what can be stored at once in RAM, at least if you believe them when they say they need to stream data (and a notably larger amount vs 'before'). What we don't know is how much data needs to be accessed at what rate - or, equivalently, how much RAM you would need for the same result at different levels of storage performance. It's possible it all, always, fits neatly into less-than-PS5 levels of memory and storage performance. It's possible it doesn't, or not at every point in that demo. We might have a better idea, for this demo, when the tech talk hits.
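For what it's worth, here's a toy model of that last unknown: sweep the storage speed and see what resident cache the same on-screen result would demand. The working set, demand rate and traversal length are all made-up inputs, just to show the shape of the curve:

#include <algorithm>
#include <cstdio>

int main() {
    const double workingSetGB = 4.0;   // assumed visible working set
    const double demandGBps   = 2.0;   // assumed rate at which new data becomes visible
    const double latencyS     = 0.1;   // assumed request-to-resident latency
    const double traversalS   = 10.0;  // assumed worst-case fly-through duration

    // Anything storage can't deliver in time has to be pre-cached in RAM.
    const double speeds[] = { 0.1, 0.5, 2.5, 5.5 };  // HDD, SATA SSD, NVMe, PS5-class (GB/s)
    for (double s : speeds) {
        double shortfall = std::max(0.0, demandGBps - s) * traversalS;
        double resident  = workingSetGB + demandGBps * latencyS + shortfall;
        std::printf("%4.1f GB/s storage -> ~%5.1f GB resident\n", s, resident);
    }
    return 0;
}

Under those made-up inputs a PS5-class drive collapses the requirement to little more than the working set, while an HDD would need ~20 GB pre-cached - which is exactly the question the tech talk might answer.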