Expectations are a funny thing. I mean, whenever we get a new landmark title from the best studios in the world, the window shifts a bit -- people start expecting better animations, better visuals. The ever-shifting window of game prices has let the industry stay roughly in sync with those expectations; these days you may pay 80 or 100 dollars, all said and done, for your game, and that's on the lower end of additional costs. Technology moving forward does help developers meet expectations: in some ways, it is easier than ever to do with little work what would have taken an insane amount of effort not long ago, with far worse results. We tend to gloss over this, but as any developer familiar with Unity or UE can attest, it's not all that hard to go down a winding rabbit hole of 'cool' tech you can toss into your game to seemingly increase quality without understanding much of what's going on (and that's often by design).
But turn your attention for a second to the descriptive language Epic uses for a system like Nanite. They aren't being particularly careful with their words, nor are they being especially direct about reasonable expectations. When you discuss something like micropolygons, a one-to-one mapping between polygon and pixel, cutting a mesh up through virtualization, or streaming that data in through advanced SSDs, that's one thing. But that's not what people are going to hear or focus on: they'll focus on "infinite detail." They won't think about advanced tessellation or 8K texture mapping according to distance from the camera; they'll just think, "Oh, so you mean I can see millions and millions of polygons even if they're far away! Cool!" They're going to misunderstand the technology, and what solutions Epic is actually offering developers here. When you show people ridiculously detailed rocks -- rocks that were literally scanned into a computer, cleaned up by AI, and provided through an otherwise expensive service -- they don't think about how those rocks are low-hanging fruit at the end of the day. Because, you know, we don't have like... alien artifice to just scan in for our completely unique ideas. They just think: oh, shit, look at all that infinite rock detail!
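To make "one polygon per pixel" less mystical, here's a toy back-of-the-envelope calculation in Python. It assumes a simple pinhole camera; the function names are mine, and this is nothing like Nanite's actual cluster math -- it just shows that the target triangle size is a function of distance and resolution, not "infinity."

```python
import math

def pixels_per_triangle(edge_world_size, distance, fov_y_deg, screen_height_px):
    """Approximate on-screen size (in pixels) of a triangle edge of a given
    world-space size, for a pinhole camera. Toy model, not an engine's math."""
    # Height of the view frustum at the given distance.
    frustum_height = 2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0)
    return edge_world_size / frustum_height * screen_height_px

def target_edge_length(distance, fov_y_deg, screen_height_px):
    """World-space edge length at which a triangle projects to ~1 pixel --
    the 'micropolygon' target a virtualized-geometry system aims at."""
    frustum_height = 2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0)
    return frustum_height / screen_height_px

# A 1 cm triangle edge viewed from 10 m away, 60-degree FOV, 1080p screen:
size_px = pixels_per_triangle(0.01, 10.0, 60.0, 1080)
# Far enough away, triangles shrink below a pixel, so a coarser cluster can
# be streamed in with no visible loss. That swap is the actual win here,
# not "infinite detail."
```

The point of the sketch: detail beyond roughly one triangle per pixel is literally invisible, so the system's job is picking the cheapest representation that stays under that threshold -- which is an LOD problem, not a detail-creation problem.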
And it's not their fault. They're being asked to interpret a demo described in language that is difficult to parse even for people who know what they're talking about, never mind a hardcore audience of players who have no idea what an octree is (unless it's a woody Cthulhu-style monster, that is).
So it's natural for new technology to push people into believing that more is achievable. And better engine technology is a necessary condition for better games, but it isn't a sufficient one -- people still have to make this stuff, and that work takes a lot of different forms. If you have some super-low-poly model, for instance, I don't think there's any inherent value in tessellating it into micropolygons beyond the fact that that's how the system works -- it's not going to produce detail that isn't there (and for that kind of mesh, nobody should expect it to). It's just an engine rendering stuff according to one paradigm.
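You can see why subdivision adds no detail with a toy sketch. This is a minimal midpoint split in Python, with no smoothing scheme -- my own illustration, not how any engine tessellates: every new vertex is a linear blend of existing ones, so a flat face stays exactly flat no matter how many triangles you split it into.

```python
def midpoint(a, b):
    """Midpoint of two 3D points."""
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def subdivide(tri):
    """Split one triangle into four via edge midpoints. The mesh gains
    triangles, but every new vertex lies on the original surface, so it
    gains zero new shape."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

# A flat triangle in the z = 0 plane:
flat = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
tris = subdivide(flat)
# Four triangles now, but every vertex still has z == 0 -- the face is
# still flat, just more expensive to store and render.
all_flat = all(v[2] == 0.0 for t in tris for v in t)
```

Scanned megascans look incredible precisely because the detail was captured from reality first; the renderer only preserves it. Run this split as many times as you like on a low-poly model and you get the same low-poly silhouette in smaller pieces.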
These demos always inflate expectations, and when you see the first actual game using the technology, you're not going to get that quality. Hell, Fortnite alone will prove the point once Epic converts it over to UE5; it will look better, yes, but not because there's suddenly infinite detail streaming in from scanned assets -- it will be because the GI solution is better and LOD transitions will stop being noticeable. Other than that? It's gonna be Fortnite. I feel for developers who see the demo right now and feel their hearts sink, but I'm sure that when games start coming out on the thing, the only devs raising expectations will be the same ones doing that now on proprietary tech or UE4.