I am not sure why he thinks I am talking about run-time procedural texturing, because that is not what I meant. Perhaps I wrote it in a way that gave that impression?
My point is this (I guess the way I wrote it here on ResetEra still isn't getting across, though oddly enough it seemed to be understood on Beyond3D):
Imagine you have two ways to prepare a texture. One is a completely bespoke 4K texture baked out from a high-res model. The other has a lower base resolution (1024), with its detail then made up of stamped or instanced trims, decals, and shared repeating detail textures. The latter is the direction modern game development has gone, especially modern open-world development, since much of the detail layering is shared between objects.
The first requires a lot more space on disk and in VRAM. It represents the idea of every object being wholly bespoke, unique, and static in memory as that asset. The other type of texturing system is smaller in VRAM and on disk, and it also requires less artist time, since you are not remaking an entire asset to create variation; rather, you are changing decals, trims, base colour, etc. in the editor (as was shown to DF by Cloud Imperium Games and id, and I am quite sure many other game studios have switched over to this method of detail creation). I called the latter procedural, as that is how I understand it, not procedural as in "the GPU is generating textures".
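To put rough numbers on the disk/VRAM point, here is a back-of-envelope sketch in Python. The bytes-per-texel figure, layer count, and asset count are all made-up illustrative values (assuming something like 1 byte/texel block compression and ignoring mip chains), not figures from any real engine:

```python
# Rough VRAM comparison: bespoke 4K textures per asset vs. a 1K base
# per asset plus a pool of shared trim/decal/detail layers.
# All numbers are illustrative assumptions, not engine data.

def texture_bytes(resolution, bytes_per_texel=1):
    """Size of one square texture (no mips, ~1 byte/texel compressed)."""
    return resolution * resolution * bytes_per_texel

ASSETS = 100

# Way 1: every asset carries its own unique 4K bake.
bespoke_total = ASSETS * texture_bytes(4096)

# Way 2: every asset has a 1K base, plus a shared pool of
# (hypothetically) 4 trim/decal/tiling detail sheets paid for once.
shared_layers = 4 * texture_bytes(1024)
layered_total = ASSETS * texture_bytes(1024) + shared_layers

print(f"bespoke: {bespoke_total // 2**20} MiB")   # 1600 MiB
print(f"layered: {layered_total // 2**20} MiB")   # 104 MiB
```

The shared layers are paid for once, so the layered approach's footprint barely grows as you add assets, which is exactly why large-scale open-world pipelines lean on it.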
The idea that an open world game would want fully bespoke, completely unique texture detail on every asset, and would therefore need to swap out huge swathes of texture data as you merely turn the camera about, is antithetical to how asset reuse (trims, decals, tiling detail textures) is integral to making modern games at large scale, where teams cannot spend the time to make fully bespoke textures. It is also very hard to imagine needing to flush such large amounts of GPU memory when turning the camera in such a game (even one with very unique textures per asset), considering you are only going to be seeing mip 0 very close to the camera. In reality you would be swapping only a handful of extremely large textures, while mid-distance and far detail would perhaps not be swapping at all, or would be swapping mid-chain low-res mips. Why? To prevent aliasing, of course, which is why we use mipmaps in the first place.
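The mip 0 point can also be shown with quick arithmetic. For a square texture, each mip level has a quarter the texels of the level above it, so mip 0 alone is about three quarters of the whole chain, and the mid/low mips that distant objects actually sample are a tiny fraction of it. A minimal sketch (again assuming a square texture and uniform bytes per texel):

```python
# Why streaming mostly moves mip 0: each mip level is a quarter the
# size of the level above, so the top mip dominates the chain.

def mip_chain_bytes(resolution, bytes_per_texel=1):
    """Per-level sizes for a full mip chain down to 1x1."""
    sizes = []
    while resolution >= 1:
        sizes.append(resolution * resolution * bytes_per_texel)
        resolution //= 2
    return sizes

chain = mip_chain_bytes(4096)
total = sum(chain)

print(f"mip 0 share of chain:  {chain[0] / total:.1%}")        # 75.0%
print(f"mips 3+ share of chain: {sum(chain[3:]) / total:.2%}")  # 1.56%
```

So only assets right at the camera need their huge top mips resident; everything further out can live on the cheap tail of the chain, which is why turning the camera does not imply flushing gigabytes of texture data.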