
Uhtred

Alt Account
Banned
May 4, 2020
1,340
Alex has a Crysis box to the right of him.

Respect.

Edit: lol, everyone noticed.
 

Raide

Banned
Oct 31, 2017
16,596
Excellent video that goes into detail on some of the main UE5 features. The Nanite stuff seems like the game-changer, but it also seems very costly on the performance front. A GPU with plenty of grunt seems to be the best way to get things done, since that is what is doing all the lighting and rendering work here.

I would still like to see some actual numbers in terms of CPU/GPU/RAM etc., but it's great to show people what UE5 could potentially be doing for next-gen systems.
 
OP

III-V

Member
Oct 25, 2017
18,827
Just finished watching. Really great work, Alex. Thanks, DF. I wonder if perf would improve with 4K textures instead of 8K, while still getting the benefits of no discrete LOD changes, etc.
 
TLDW

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
summary if no one wants to do it
  • Lumen is not ray tracing, it's using another form of tracing
  • lumen also has specular reflections
  • large objects are traced through voxels
  • medium objects are signed distance fields
  • small objects are screen space (similar to Gears 5 on Series X)
  • you can see screen space artifacts in the demo
  • uses temporal accumulation like RT, so there's a latency to lighting
  • micro-polygon rendering is primarily used in offline rendering like film
  • nanite uses a high resolution tiling normal map for fine details to help conserve vram through virtual texturing
  • nanite scales the model complexity by how many pixels it takes up
  • micro-sized objects are shadowed with SS shadows and combined with a virtualized shadow map
  • shadow map resolution is aligned with screen resolution
  • shadows are filtered to create a penumbra
  • unknown if nanite applies to animated objects like foliage or hair, or characters
  • demo is dynamic res (mostly 1440p) at 30fps
  • resolution scaling is more expensive with this technique
  • pixel-sized triangles are pretty inefficient, so how is Epic avoiding wasteful rendering?
 

Lukas Taves

Banned
Oct 28, 2017
5,713
Brazil
  • unknown if nanite applies to animated objects like foliage or hair, or characters
I think not. You can see some polygon edges on the main char if you zoom in. Unlike the rest of the scene where everything is super dense.
 

Bad_Boy

Banned
Oct 25, 2017
3,624
Crazy how fast these presentations are made. It would take me a week to prepare a school presentation on the tech behind this demo lol
 

gofreak

Member
Oct 26, 2017
7,736
  • pixel-sized triangles are pretty inefficient, so how is Epic avoiding wasteful rendering?

They seem to be skipping the GPU rasteriser altogether, at least some of the time. I guess their compute rasteriser makes this a lot more efficient than naively dumping single-pixel triangles at the GPU would. Maybe they're splatting triangles!
 

Neuromancer

Member
Oct 27, 2017
3,760
Baltimore
I have a dumb question. If devs can bring in super high resolution models etc into the engine, won't game sizes be ridiculous? Or does the engine do the work to slim them down?
 

imapioneer

Member
Oct 27, 2017
1,057
10/10 video, Alex. The amount of positive reception the UE5 video has gotten is absolutely insane.
 

Elios83

Member
Oct 28, 2017
976
summary if no one wants to do it

Thanks for the summary. I wonder how this technique could be integrated with, and possibly boosted by, the specific hardware implementations of AMD and Nvidia (intersection engines and RT cores respectively). It seems like they have basically developed their own software solution to address the same problems that ray tracing tries to address (global illumination, off-screen multi-bounce reflections and such).
 

Raide

Banned
Oct 31, 2017
16,596
I have a dumb question. If devs can bring in super high resolution models etc into the engine, won't game sizes be ridiculous? Or does the engine do the work to slim them down?
Normally games have multiple versions of the same model at different LODs, letting the engine swap them in and out. The idea for UE5 is that there's only the high-detail version, and the system does the work.
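For anyone who hasn't seen it spelled out, the traditional approach described above is basically a distance table (all thresholds and mesh names here are made up, just to illustrate):

```python
# Traditional discrete LOD: the engine keeps several versions of a mesh
# and swaps them based on camera distance. Hypothetical thresholds.
LOD_LEVELS = [
    (10.0, "statue_lod0.mesh"),   # < 10 m: full-detail model
    (30.0, "statue_lod1.mesh"),   # < 30 m: reduced model
    (80.0, "statue_lod2.mesh"),   # < 80 m: low-poly model
]
FALLBACK = "statue_lod3.mesh"     # beyond that: lowest-detail version

def pick_lod(distance: float) -> str:
    """Return the mesh variant to draw at this camera distance."""
    for max_dist, mesh in LOD_LEVELS:
        if distance < max_dist:
            return mesh
    return FALLBACK
```

The visible "pop" happens exactly when the distance crosses one of those thresholds; Nanite's pitch is to replace the whole table with continuous scaling of the single source mesh.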
 

Andromeda

Member
Oct 27, 2017
4,846
They're not rasterising them at all. It's all compute based. The render pipe is asleep.

It doesn't. For now it's static geometry only, but they have plans to extend the system ofc.
That's not what they say; apparently they are doing both, depending on which one is faster:
"We can't beat hardware rasterisers in all cases though, so we'll use hardware when we've determined it's the faster path. On PlayStation 5 we use primitive shaders for that path, which is considerably faster than using the old pipeline we had before with vertex shaders."
 

xem

Member
Oct 31, 2017
2,043
I want an in-depth video of how SSDs are helping stream/load in portions of the world so quickly. yummy
 

Corralx

Member
Aug 23, 2018
1,176
London, UK
That's not what they say, apparently, they are doing both depending which one is faster:

Yes. I theorised in another thread how this works.
You cannot beat the rasteriser in general, but you can when you have that crazy 1-triangle-per-pixel mapping.
So what happens is:
- if the triangle is small enough, they send it down the compute pipe and do a sort of software rendering instead of rasterising it
- if the triangle is bigger than a threshold (probably a 2x2 fragment block) they send it down the render pipe for standard rasterisation, as that's going to be faster

This way you can mix and match ultra-high-detail meshes and "standard"-detail meshes together, without having to worry about how big your triangles are going to be on screen.
My understanding is, at least in this demo where all meshes have an absolutely insane amount of geometry, the vast majority of the geometry is software rendered, not using primitive shaders.
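The split I'm describing would look something like this (the 2x2-fragment threshold is my guess, not a confirmed number from Epic):

```python
# Sketch of the hybrid rasterisation decision: tiny triangles go to the
# compute-based software rasteriser, larger ones to the hardware pipe.
SOFTWARE_RASTER_THRESHOLD = 4.0  # screen-space area in pixels (~2x2 block)

def choose_raster_path(tri_area_px: float) -> str:
    """Route a triangle by its projected screen-space area in pixels."""
    if tri_area_px <= SOFTWARE_RASTER_THRESHOLD:
        return "compute"   # software rasteriser wins on pixel-sized tris
    return "hardware"      # fixed-function raster is faster for big tris

def route_triangles(areas):
    """Partition a batch of projected triangle areas by raster path."""
    batches = {"compute": [], "hardware": []}
    for area in areas:
        batches[choose_raster_path(area)].append(area)
    return batches
```

The point is that the decision is per triangle, so a scene can mix both kinds of mesh and each triangle still takes whichever path is cheaper.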
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
Yes. I theorised in another thread how this works.
You cannot beat the rasteriser in general, but you can when you have that crazy 1-triangle-per-pixel mapping.
So what happens is:
- if the triangle is small enough, they send it down the compute pipe and do a sort of software rendering instead of rasterising it
- if the triangle is bigger than a threshold (probably a 2x2 fragment block) they send it down the render pipe for standard rasterisation, as that's going to be faster

This way you can mix and match ultra-high-detail meshes and "standard"-detail meshes together, without having to worry about how big your triangles are going to be on screen.
My understanding is, at least in this demo where all meshes have an absolutely insane amount of geometry, the vast majority of the geometry is software rendered, not using primitive shaders.
Great post - I really think this will be the case in the end.

BTW, I made this video before Epic's team responded to us, when it had a very different structure. We never knew if they were going to get back to us with the questions we asked, so it was going to be a "what can we see from this video" kind of thing where I look at the artefacts and speculate. So the final video structure may sound a bit different in some of the passages.

I am so happy Epic agreed to answer my questions in the end, instead of my having to do guesswork based upon a video ONLY.
excellent video and analysis Dictator, the Lumen details especially enlightening :D

Thanks
 

Corralx

Member
Aug 23, 2018
1,176
London, UK
Great post - I really think this will be the case in the end.

BTW, I made this video before Epic's team responded to us, when it had a very different structure. We never knew if they were going to get back to us with the questions we asked, so it was going to be a "what can we see from this video" kind of thing where I look at the artefacts and speculate. So the final video structure may sound a bit different in some of the passages.

I am so happy Epic agreed to answer my questions in the end, instead of my having to do guesswork based upon a video ONLY.

Yeah I guessed as much, as the article had a lot more info.

This technology is the most excited I've been in years. Ray tracing doesn't even come close.
I theorised years ago that an ultra-specialised, compute-based micro-polygon software renderer similar to REYES could beat the standard pipe, and some early benchmarks showed it was possible, at least in some cases.
What I was missing was how to process the insane amount of geometry you need to achieve a 1:1 triangle-to-pixel mapping in most cases.
The details are not out yet, but the idea of using an image-based, topology-preserving encoding of the meshes and streaming them just like virtual textures (especially now that we have sampler feedback) is looking increasingly likely.
This tech is abso-fuckin-lutely bonkers.
All I want is to be done with work to start writing some code lol.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
Yeah I guessed as much, as the article had a lot more info.

This technology is the most excited I've been in years. Ray tracing doesn't even come close.
I theorised years ago that an ultra-specialised, compute-based micro-polygon software renderer similar to REYES could beat the standard pipe, and some early benchmarks showed it was possible, at least in some cases.
What I was missing was how to process the insane amount of geometry you need to achieve a 1:1 triangle-to-pixel mapping in most cases.
The details are not out yet, but the idea of using an image-based, topology-preserving encoding of the meshes and streaming them just like virtual textures (especially now that we have sampler feedback) is looking increasingly likely.
This tech is abso-fuckin-lutely bonkers.
All I want is to be done with work to start writing some code lol.
For me when watching the demo I was like "ok bounced light i have seen before without screen-space probs" - but I HAVE NEVER SEEN THIS GEO BEFORE. For me it is soooooooo much more impressive than the lighting presentation or anything else at all.

I am really curious about the exact details of how they break down meshes (perhaps like Ubisoft did for Ass Creed Unity? Or Geometry texture thingies??), what meshes can be done this way, and how it can be extended. How it scales in performance in the end here, if it is really so darn compute heavy... then we will see some unique scaling here between GPUs.
 

DanteMenethil

Member
Oct 25, 2017
8,061
Hi, I am a dumb dumb barging in to ask: can Lumen be offloaded to tensor cores? It's not ray tracing, but it's a derivative, right?
 

Thera

Banned
Feb 28, 2019
12,876
France
So Dictator, you finally (maybe?) have your answer about the lack of LOD transitions in the Setsuna 2 trailer. Maybe it runs on UE5.
Did you ask Epic about it?
 

Nooblet

Member
Oct 25, 2017
13,637
So Dictator, you finally (maybe?) have your answer about the lack of LOD transitions in the Setsuna 2 trailer. Maybe it runs on UE5.
Did you ask Epic about it?
Don't think the engine is available to anyone atm, not even as early access. Plus I doubt a AAA project well into development would be using an alpha/beta version of an engine.

Plus the game will likely launch before UE5 does.
 

Corralx

Member
Aug 23, 2018
1,176
London, UK
For me when watching the demo I was like "ok bounced light i have seen before without screen-space probs" - but I HAVE NEVER SEEN THIS GEO BEFORE. For me it is soooooooo much more impressive than the lighting presentation or anything else at all.

I am really curious about the exact details of how they break down meshes (perhaps like Ubisoft did for Ass Creed Unity? Or Geometry texture thingies??), what meshes can be done this way, and how it can be extended. How it scales in performance in the end here, if it is really so darn compute heavy... then we will see some unique scaling here between GPUs.

Afaik there are two approaches described in the literature that have never been used in production before.
One is to transform and encode the topology information into a texture; the other breaks the mesh down into sort-of meshlets and pre-computes progressive LODs automatically (I think they're called progressive buffers or something like that).
My bet would be on the first approach for a variety of reasons (also because it's the craziest one).
The encoding works so that lower mips of the texture correspond to a similar but less detailed topology.
This maps almost too perfectly onto the virtual texturing scheme and sampler feedback not to be the approach used.
If you're generating too much geometry in a certain area, drop a mip level in that section; if too little, stream in a higher mip.
This implicit parametrisation should also be easier on disk size and highly compressible with standard texture techniques, which could be how they avoid shipping 10-terabyte games to the user lol.
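Hedging heavily since Epic hasn't published details, but the feedback loop I'm imagining would be roughly this (the quarter-per-mip ratio is borrowed from texture mips, purely an assumption):

```python
# Toy model of virtual-texture-style streaming applied to geometry images:
# pick the mip level whose triangle count best matches the pixels a tile
# covers, aiming for roughly one triangle per pixel.
def desired_mip(pixels_covered: float, tris_at_mip0: float, num_mips: int) -> int:
    """Assume each mip level quarters the triangle count (like texture
    mips). Return the level whose count is closest to the pixel coverage."""
    best_mip, best_error = 0, float("inf")
    for mip in range(num_mips):
        tris = tris_at_mip0 / (4 ** mip)
        error = abs(tris - pixels_covered)
        if error < best_error:
            best_mip, best_error = mip, error
    return best_mip
```

A tile that shrinks on screen would report lower coverage via sampler feedback, the streamer drops to a coarser mip, and geometric detail tracks screen size for free.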
 
Oct 25, 2017
7,506
Awesome video, a lot of it flew over my head, but to see someone who is obviously in tune with these things excited for it made me excited.
 

gofreak

Member
Oct 26, 2017
7,736
Seems like Epic doesn't want to go into specifics when it comes to the SSD though. They just say you'll need one.

It's kind of a hard thing to answer in a general case though. It would depend on the individual game (or demo) and system memory, among other things, to determine what you 'need' for a given fidelity or level of temporal stability. Some UE5 games using this system may need relatively little throughput from storage (e.g. if required data is small enough to be more often resident in memory). Some UE5 games could theoretically need either more memory or more storage throughput than even the best SSDs offer, to achieve optimal fidelity all the time, at a given target res.
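As a hypothetical back-of-the-envelope example of why the answer is so game-dependent (every number below is invented):

```python
# Rough streaming-throughput estimate: how much new data must come off
# disk per second depends on how fast the camera invalidates the cache.
def required_throughput_mb(working_set_mb, resident_mb, turnover_per_sec):
    """MB/s needed if `turnover_per_sec` (0.0..1.0) of the non-resident
    part of the working set must be fetched each second."""
    missing = max(working_set_mb - resident_mb, 0)
    return missing * turnover_per_sec

# A slow fly-through of a 6 GB scene with 4 GB resident and 10% churn per
# second needs ~200 MB/s, fine for SATA. A hard cut or teleport (100%
# churn) would need ~2 GB/s, which is NVMe territory.
```

Same engine, same feature set, wildly different storage requirements, which is probably why Epic won't commit to a number.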

While general answers won't be possible, when they do the tech talk we might get a better idea of how gpu compute/memory/storage-bandwidth interact. And maybe even for this demo we might get an idea of what was specifically needed.
 

Cuyeo

Member
Oct 27, 2017
33
I know this is nitpicking, but I'm still annoyed by the artifacts around moving objects caused by the temporal AA solution; this comes with any temporal AA method, though. Higher frame rates definitely help.
 

P40L0

Member
Jun 12, 2018
7,630
Italy
Multiplatform engine doesn't mean the exact same quality on all platforms, or are you expecting an iPad or a Switch to be able to do the same as the demo yesterday?
When talking about SSD speed differences, I wouldn't expect big, noticeable real-life changes compared to what we saw.
Same with a 10 TFLOPs vs 12 TFLOPs difference between PS5 and XSX; I would just expect a bit higher resolution and/or framerate on the latter.

Big downgrades will be noticeable on current-gen consoles, mainly bottlenecked by HDDs and ancient CPUs.
 

Alexandros

Member
Oct 26, 2017
17,814
The video was super informative, especially the analysis after the 8-minute mark with the explanation on how LOD works in current games. Easy to understand and educational even for a layman such as myself.
 

Deleted member 10847

User requested account closure
Banned
Oct 27, 2017
1,343
When talking about SSD speed differences, I wouldn't expect big, noticeable real-life changes compared to what we saw.
Same with a 10 TFLOPs vs 12 TFLOPs difference between PS5 and XSX; I would just expect a bit higher resolution and/or framerate on the latter.

Big downgrades will be noticeable on current-gen consoles, mainly bottlenecked by HDDs and ancient CPUs.
You are wasting your time.