
icecold1983

Banned
Nov 3, 2017
4,243
We are going to see some amazing graphics from the next-gen consoles, but if they aren't even capable of hitting Control levels of ray tracing, then I think we might end up seeing a generational gap between next gen and high-end PC soon after they launch. I agree, a mid-gen refresh will be a must.

There's still a LOT developers can do to improve rasterization. Even without any RT, games developed from the ground up for the new consoles would look far better than anything available today.
 

ken_matthews

Banned
Oct 25, 2017
838
Generational gap in terms of RT? Possibly. But I'd assume that for that to happen, games would need to be designed completely around RT rather than rasterization or a mix of both. Otherwise, consoles will still be the base that the vast majority of games are built around.

Yeah, that seems likely for all the cross-platform games. But there are a few developers out there that really like to push the limits on PC, like Crytek, 4A Games, Remedy, and CD Projekt. We might end up seeing some crazy ray tracing in the high-end PC scene, especially with DX12 Ultimate and the RTX 30-series cards coming out soon.

There's still a LOT developers can do to improve rasterization. Even without any RT, games developed from the ground up for the new consoles would look far better than anything available today.

Of course, there is no doubt about that.
 

ElNerdo

Member
Oct 22, 2018
2,220
What do you think PBR means, exactly?

Tensor cores are not involved in ray tracing; you are thinking of the RT cores, which handle the AABB traversals, ray-triangle intersections, and reporting of hits and misses.
Dang, and I even googled to make sure I was using the right term. ):

But still, I wonder how big or small the difference will be between Nvidia and AMD when it comes to RT in games.
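For what it's worth, the ray-triangle test that those RT cores accelerate in fixed-function hardware is a short piece of math. Here is a minimal pure-Python sketch of the standard Möller-Trumbore intersection test (the BVH/AABB traversal that surrounds it in a real ray tracer is omitted; function and variable names are my own):

```python
def _sub(a, b): return [a[i] - b[i] for i in range(3)]
def _dot(a, b): return sum(a[i] * b[i] for i in range(3))
def _cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0]]

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore test: return the hit distance t, or None on a miss."""
    e1, e2 = _sub(v1, v0), _sub(v2, v0)
    pvec = _cross(direction, e2)
    det = _dot(e1, pvec)
    if abs(det) < eps:            # ray is parallel to the triangle plane
        return None
    tvec = _sub(origin, v0)
    u = _dot(tvec, pvec) / det    # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = _cross(tvec, e1)
    v = _dot(direction, qvec) / det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = _dot(e2, qvec) / det      # distance along the ray to the hit point
    return t if t > eps else None

# A ray fired from z=-1 straight at a triangle in the z=0 plane hits at t=1:
print(ray_triangle_hit([0, 0, -1], [0, 0, 1],
                       [-1, -1, 0], [1, -1, 0], [0, 1, 0]))  # -> 1.0
```

An RT core effectively runs this test (plus box tests for the BVH nodes above it) millions of times per frame, which is why doing it on shader cores instead is so costly.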
 
OP
ResetEraVetVIP
Nov 20, 2019
1,861
Who can be stupid enough to compare a CGI render to in-game graphics?
First off, not all CGI is "good"; also, CGI keeps getting better, just like real-time graphics. Digital Foundry compared FF7R to Advent Children and it made sense. And saying Keanu Reeves's likeness isn't good in the game isn't wrong; expecting the best CGI on a current-gen console is.
 

arsene_P5

Prophet of Regret
Member
Apr 17, 2020
15,438
Realistically, are we expecting a DLSS-like solution for either console?
I wouldn't expect them to reach DLSS, because RDNA 2 uses the CUs for ML, IIRC. That means you take resources away from rendering. It remains to be seen how good the consoles actually are at ML.

DirectX ML solution:
carcompare.png
 

Micerider

Member
Nov 11, 2017
1,180
There's still a LOT developers can do to improve rasterization. Even without any RT, games developed from the ground up for the new consoles would look far better than anything available today.

Indeed, I remember how GI (in rasterization, with light probes) was touted for Unreal Engine before this gen but ended up under-used due to the lower-spec consoles. RT will probably end up being the preferred option for some effects (reflections) or even used in parallel with rasterization.
 

Deleted member 18161

user requested account closure
Banned
Oct 27, 2017
4,805
Yeah, RT alone isn't going to do it. I mean, Minecraft with RT looks better, but it's still Minecraft with RT.

This can't be said enough. Character models, texture resolution, PBR, shading quality, shadow quality, and standard lighting models can all be greatly improved even without touching ray tracing the way Metro Exodus, Control, and BFV use it. As always, the biggest leap between generations for next-gen exclusive games will be in the size and density of the environments, because developers suddenly have so many more triangles to play with. Think the AC IV cities vs. Paris in AC Unity in terms of scale.

Not every game will use RT for its GI, shadows, and/or reflections at launch. In fact, I'd be willing to bet it will be nearer the back end of the generation before it's adopted by the majority of even larger-scale development houses for most of those facets of rendering.

RT for most things will definitely become the new standard in video game rendering, but it will take a few more years to be fully embraced to the point where it's used as standard for GI, reflections, shadows, physics, and sound calculations all at the same time. Small steps.
 
Oct 31, 2017
2,164
Paris, France
Besides dynamic clothing, I really hope mesh blending becomes the norm for environments/terrain. It makes things look a lot less video gamey.

polycount.com

Battlefront's Level Construction - how can everything just sort of match and blend?

This video was posted in a Slack group. https://vimeo.com/172293963

Outside of Frostbite games, and I think RDR2, I don't see many games doing this. It is such a little thing but it makes such a huge difference.

EDIT: Just look at this...the game looks amazing, but how dare they not blend the rocks with the terrain.
z425bBw.jpg

I'm sad; you could have picked floating grass or trees.

It's in so many open-world games, I guess it has to do with the design process and pipeline: first a procedurally generated terrain, then one team with a brush, then another one, and another... and in the end either it has been overlooked or it's worse to fix than to leave it be.

EDIT : So I just saw the video you quoted, impressive stuff.
 

Afrikan

Member
Oct 28, 2017
16,966
Besides dynamic clothing, I really hope mesh blending becomes the norm for environments/terrain. It makes things look a lot less video gamey.

polycount.com

Battlefront's Level Construction - how can everything just sort of match and blend?

This video was posted in a Slack group. https://vimeo.com/172293963

Outside of Frostbite games, and I think RDR2, I don't see many games doing this. It is such a little thing but it makes such a huge difference.

EDIT: Just look at this...the game looks amazing, but how dare they not blend the rocks with the terrain.
z425bBw.jpg

Does the PS4 game Dreams do something similar?
 

Unknown

Member
Oct 29, 2017
260
DirectX ML solution:
carcompare.png

This likely isn't a good representation of what a machine-learning system could produce from an aliased rendered output, as this shows upscaling of a downsampled high-res image, so all the missing info it needs to reconstruct already exists; it's just averaged together. That's not the case with an aliased image.
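A tiny 1-D sketch of the point being made here: a thin feature that gets averaged into a downsampled image leaves a trace for the upscaler to recover, while point sampling (which is effectively what an aliased render does) can drop it entirely. The arrays are made up purely for illustration:

```python
# A high-res row of pixels with a one-sample-wide bright feature at index 3.
signal = [0, 0, 0, 1, 0, 0, 0, 0]

# Box downsample (how the comparison image was produced): average sample pairs.
# The spike survives as a dimmer pixel that an upscaler can work with.
box = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(4)]

# Point sample (what aliasing amounts to): keep only every other sample.
# The spike falls between the samples and is simply gone.
point = [signal[2 * i] for i in range(4)]

print(box)    # -> [0.0, 0.5, 0.0, 0.0]  (the feature left a trace)
print(point)  # -> [0, 0, 0, 0]          (nothing left to reconstruct)
```

This is why results on downsampled photos tend to overstate what the same network would achieve on raw aliased game frames.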
 

ken_matthews

Banned
Oct 25, 2017
838
Does the PS4 game Dreams do something similar?


I don't know. Mesh blending like that is the result of programmatic vertex painting, where the blend is based on the mesh intersection length. There is nothing next-gen about vertex painting (many game engines can do it by hand, and many games use it in various ways), but it's such a little thing that really goes a long way toward making games look better, toward making them look less like video games. In large part, it's why the environments in Battlefront and RDR2 look so good: all the assets are blended into the terrain, and you rarely if ever see any seams. This has probably been my biggest graphics pet peeve ever since I noticed it; it's such an eyesore. Honestly, if I had one wish for graphics going forward, it would be for mesh blending to become the standard for every game.
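As a rough illustration of the programmatic vertex painting idea (the function name and the 0.5-unit blend range are mine, not from any particular engine), a tool could weight each vertex of a prop by its height above the terrain, so a blend shader can fade the terrain material in near the seam:

```python
def blend_weight(height_above_terrain, blend_range=0.5):
    """Terrain-material blend weight for one vertex:
    1.0 right at the terrain seam, fading linearly to 0.0
    once the vertex is blend_range units above the ground."""
    t = height_above_terrain / blend_range
    return max(0.0, min(1.0, 1.0 - t))

# Vertices near the ground get strong blending; higher ones keep the
# prop's own material, so the rock appears to "grow" out of the terrain.
heights = [0.0, 0.1, 0.25, 0.5, 2.0]
weights = [blend_weight(h) for h in heights]
print(weights)  # roughly [1.0, 0.8, 0.5, 0.0, 0.0]
```

The weights would typically be baked into the mesh's vertex colors, and the material shader lerps between the prop texture and the terrain texture using them; the appeal is that it is fully automatic once scripted.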
 

MajesticSoup

Banned
Feb 22, 2019
1,935
Possibly. We know MS has int4/int8 support for ML, but there's no dedicated hardware for it, so they'd be taking CUs away for the purposes of "DLSS". That's my understanding of it, at least. It would also probably not be anywhere near as good as DLSS 2.0.
There's a good chance DLSS 2.0 is less reliant on tensor cores than DLSS 1.0; they could maybe even do away with them completely in future cards.
1. DLSS "1.9" ran on the shader cores, not the tensor cores.
2. DLSS 2.0 no longer uses per-game training; it uses "generalized" training.
3. Freestyle came out after DLSS to compete with RIS, which looked better than DLSS 1.0.

This leads me to believe that DLSS 2.0 is more of an evolution of Freestyle.
 

Nooblet

Member
Oct 25, 2017
13,621
Advent Children wasn't state-of-the-art CGI in 2005 the way Avatar was in 2009 (and still is). Advent Children was budget CGI.

This is 2006 CGI. Games aren't even close.


Davy Jones remains one of the most impressive CG characters to this day.
The way his skin lights up when he is lighting his cigar is just phenomenal; the subsurface scattering looks perfect, and the intensity of the lighting on his skin matches the intensity of the lighting on the human actor perfectly.

Bruh, games just barely have ray tracing in them. Literally every shot of Avatar is ridiculous, doing things way beyond what we can do in games.

That water at 0:46 is just something else.
Still blows my mind to this day.
 

dgrdsv

Member
Oct 25, 2017
11,843
There's still a LOT developers can do to improve rasterization. Even without any RT, games developed from the ground up for the new consoles would look far better than anything available today.
They will look better. "Far better" though? Nope.

Most of the advances in graphics next gen won't come from "improvements of rasterization".
 

icecold1983

Banned
Nov 3, 2017
4,243
They will look better. "Far better" though? Nope.

Most of the advances in graphics next gen won't come from "improvements of rasterization".

I disagree, but we will see soon enough. Save this post: I predict next-gen games on PS5 and Xbox will look far better than RTX Control while using what's likely to be highly reduced levels of RT.
 

AudiophileRS

Member
Apr 14, 2018
378
Based on the hardware at the time, Avatar had access to up to ~320 TFLOPS of CPU power and ~100 TB of RAM, with each frame quoted as taking several hours to render, so let's say 3+ hours per frame. Of course, offline rendering is a very different type of operation.

Not only does it have a power advantage, but it has ~325,000-650,000x the time to render each frame, since games need 16.7-33.3 ms per frame depending on the target frame rate.

The crazy thing is, when we do see real-time work that is broadly comparable to Avatar, we'll likely see it on hardware with much, much less raw power, and it will still be happening within a minuscule time budget; we're talking less than 0.00016% of the offline render time.

Hell, it's crazy we already get what we do. When you factor in power (vs. current gen) and time, the per-frame budget gap could well have been in the 100,000,000x region. Avatar is vastly superior looking and will remain so for some time. But do current games look 100,000,000x worse? I'd certainly give game devs a lot more credit than that...

Two things are for sure: I can't wait to see what devs will do next gen, and I can't wait to see what can be done with the Avatar sequels, with 11 or so years of advances.
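The frame-time ratios quoted above do check out. Here they are recomputed, assuming the 3-hour offline frame stated in the post:

```python
# Ratio of a 3-hour offline render frame to a real-time game frame
# at 30 fps (33.3 ms) and 60 fps (16.7 ms) targets.
offline_frame_s = 3 * 3600  # assumed 3 hours per frame, in seconds

for target_ms in (33.3, 16.7):
    ratio = offline_frame_s / (target_ms / 1000)
    print(f"{target_ms} ms frame: ~{ratio:,.0f}x more render time offline")
# -> ~324,324x at 30 fps and ~646,707x at 60 fps, matching the
#    ~325,000-650,000x range quoted in the post.
```

Multiplying that time ratio by the raw-compute gap between a ~320 TFLOPS farm and a current-gen console is what puts the combined per-frame budget gap into the 100,000,000x region mentioned above.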
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
There's a good chance DLSS 2.0 is less reliant on tensor cores than DLSS 1.0; they could maybe even do away with them completely in future cards.
1. DLSS "1.9" ran on the shader cores, not the tensor cores.
2. DLSS 2.0 no longer uses per-game training; it uses "generalized" training.
3. Freestyle came out after DLSS to compete with RIS, which looked better than DLSS 1.0.

This leads me to believe that DLSS 2.0 is more of an evolution of Freestyle.
DLSS 2.0 is nothing like sharpening; you can read about it by watching the GTC presentation if you wish. Freestyle is just single-frame post-processing.
 

Dan Thunder

Member
Nov 2, 2017
14,017
I like the way that comparisons to CGI get downgraded with every reply. It's like:

"I expect games to look like [ridiculously expensive movie with state of the art CGI] next-gen"

"But everything in the image is fully ray-traced for every frame and each image takes 24 hours to render!"

".....well obviously I didn't mean they'd do that but everything else is achievable"

"But their models contain 10x more polygons than these machines will realistically be able to handle?!"

"......yes, well clearly I didn't mean they'd use the same level of details for their models, but that aside. Identical."

"But their servers have hundreds of GBs of RAM to help process the mass of information needed to keep the CGI detailed enough"

"....everyone knows that and no-one expects the same level of texture quality but they're pretty much going to be the same."


I think anyone who believes that we're going to get gameplay that looks like top-of-the-range CGI is going to be disappointed. However, people should still be amazed at what's produced when you consider that the hardware used for games is not only orders of magnitude less powerful than the render farms used for films, but also that each image has to be rendered within a 30th or 60th of a second rather than 12-24 hours.
 
Jan 21, 2019
2,902
We are going to see some amazing graphics from the next-gen consoles, but if they aren't even capable of hitting Control levels of ray tracing, then I think we might end up seeing a generational gap between next gen and high-end PC soon after they launch. I agree, a mid-gen refresh will be a must.

That is just funny. I switched between RT on and off in Control more times than I can count. There is not a generational difference between the two. If this is the difference between PC and console, then I'm happy. A generational difference is Skyrim to Red Dead 2, not Control with RT on and off.
 

EVIL

Senior Concept Artist
Verified
Oct 27, 2017
2,782
I like the way that comparisons to CGI get downgraded with every reply. It's like:

"I expect games to look like [ridiculously expensive movie with state of the art CGI] next-gen"

"But everything in the image is fully ray-traced for every frame and each image takes 24 hours to render!"

".....well obviously I didn't mean they'd do that but everything else is achievable"

"But their models contain 10x more polygons than these machines will realistically be able to handle?!"

"......yes, well clearly I didn't mean they'd use the same level of details for their models, but that aside. Identical."

"But their servers have hundreds of GBs of RAM to help process the mass of information needed to keep the CGI detailed enough"

"....everyone knows that and no-one expects the same level of texture quality but they're pretty much going to be the same."


I think anyone who believes that we're going to get gameplay that looks like top-of-the-range CGI is going to be disappointed. However, people should still be amazed at what's produced when you consider that the hardware used for games is not only orders of magnitude less powerful than the render farms used for films, but also that each image has to be rendered within a 30th or 60th of a second rather than 12-24 hours.
Yep, the power gap between real-time and CGI is astronomical. We can now do some impressive real-time cutscenes involving a lot of fakery and shortcuts, without gameplay code in the background eating up a large part of performance, so you can cram it all into higher-resolution models, higher-quality lighting, and effects, and those get you closer and closer each generation. But to expect gameplay graphics that look like even 10-year-old CGI is nonsense, and it won't happen.
 

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
I wouldn't expect them to reach DLSS, because RDNA2 uses CU for ML iirc. This means you take away resources. Remains to be seen how good the consoles actually are at ML.

DirectX ML solution:
carcompare.png
That reminds me of those HQ4x filters on emulators. It doesn't seem to add detail via reconstruction like DLSS does. Still, it might be good enough that a resolution like 1080p would still look good on a 4K TV; it's better than nothing, I guess.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,930
Berlin, 'SCHLAND
That is just funny. I switched between RT on and off in Control more times than I can count. There is not a generational difference between the two. If this is the difference between PC and console, then I'm happy. A generational difference is Skyrim to Red Dead 2, not Control with RT on and off.
I think it is generational in one aspect that I like a lot. Since RT works across the board in every scene in the game, it has the propensity to turn gameplay graphics into cutscene graphics. With rasterisation, a cutscene requires you to jack up shadow map resolution to 4096x4096 for one light and place two or three fill or rim lights around characters in scenes to make them look really great. Or make sure the camera never leaves human eye level so the SSR does not break. But with RT, you can pause any one moment of the game and everything still holds up in gameplay. It really elevates the character models to look more like the cutscenes, instead of there being a big discrepancy between the two, since they are consistently lit at the highest quality possible. The same thing happens, IMO, in Metro Exodus, where models in the overworld look quite a bit better.
 

icecold1983

Banned
Nov 3, 2017
4,243
To clarify: some scenes benefit from RT much more than others. But if there are no reflective surfaces, it's really a minute detail and nothing to fuss about.
Minecraft and Quake are the only RTX games where I'd say they look a generation or more better. A generational gap in net visual sum is pretty huge. Control can certainly look a decent amount better in a given scene, but it's still within the constraints of looking mostly similar to the console versions in net visual sum.
 

Thera

Banned
Feb 28, 2019
12,876
France
This is probably my biggest graphics pet peeve ever since I noticed it, it is such an eye sore.
Agreed.
Even with my crappy photoshop job, it looks so much better with mesh blending:

dbXLwvX.gif
Are you telling me the first image is the actual game? It's terrible :(
I think anyone who believes that we're going to get gameplay that looks like top of the range CGI is going to be disappointed.
I really hope nobody is genuinely expecting that.
A generational difference is Skyrim to Red Dead 2. Not Control with RT on and off.
"PC Games Insider also reports that the development budget for Control is somewhere between EUR20 million and EUR30 Million"

RDR2 is complicated, but definitely more than $100M. Comparing them isn't relevant.
 

ken_matthews

Banned
Oct 25, 2017
838
That is just funny. I switched between RT on and off in Control more times than I can count. There is not a generational difference between the two. If this is the difference between PC and console, then I'm happy. A generational difference is Skyrim to Red Dead 2, not Control with RT on and off.

I totally agree with you; ray tracing in Control looks good, but it's definitely nowhere near a generational leap when you turn it on and off. That's not what I meant to say. What I meant was that ray tracing in Control is just the start. With the new Nvidia graphics cards and the DX12 update hitting soon, games in two or three years might look insane with much more extensive ray tracing than Control, while consoles might be stuck at Control's level. And don't get me wrong, Control's ray tracing is impressive, but it's nothing crazy, and it doesn't even come close to the technique's potential.

EDIT: I just want to echo what Dictator said. Control's ray tracing is impressive, but if you want to see generational leaps by toggling it on and off, Metro Exodus will get you a lot closer to that effect. Its RT GI looks pretty insane at times.
 

Black_Stride

Avenger
Oct 28, 2017
7,387
I'm looking forward to XGS's next more realistic title; a bunch of their studios are using Unreal Engine, and the advancements to that engine have been incredible.
The more studios using it, the easier it is for them to share techniques and technology.

Real-time Unreal work from small studios is already reaching levels we only expected from AAA studios.
Give AAA studios the engine and next gen will be truly glorious.

5NTUa57.gif



vwzEK9P.gif




hirokazu-yokohara-ue4-girl-c.jpg



hirokazu-yokohara-ue4-girl-b.jpg

All of these are one-man teams.
 

ken_matthews

Banned
Oct 25, 2017
838
Very nice. I presume engines like UE4 do things like this for you?

The only game engine I've ever seen or read about that does the blending automatically is Frostbite, and I linked to a polycount thread about it in my initial post. I remember seeing a video on Doom Eternal's graphical features that showed vertex painting used to blend some of the assets into the terrain, but I think that was all done by hand (can't find the video at the moment). Vertex painting should be a pretty standard thing in game engines by now; UE4 has had it for a while, but I'm pretty sure it doesn't do the auto-blending for you as a built-in feature of the engine; you have to script it out.

Honestly, I think it's just something most developers overlook or don't prioritize because it's not an obvious thing, and it's easy to underappreciate how much of an impact it has on the overall visual appearance of an environment.
 

Hey Please

Avenger
Oct 31, 2017
22,824
Not America
I am going to quote (kind of a copy-paste, given that thread is closed now) a Dictator post that provides great examples for his post #1633 above:

Dictator said:
They are probably not "the best", since the models themselves have that massive subjective quality to them. But they definitely do not lose as much of their splendor in shadow / in gameplay, thanks to the RT GI. Here are some screens:

Voxel Probe GI
1cikd9.jpg


RT GI
28ektt.jpg


Voxel Probe GI
3fljtz.jpg


RT GI
4drkz8.jpg


If you notice, the skin is evenly lit with the voxel probe GI. It essentially glows in shadow and lacks much of the depth; also, none of the subsurface scattering is showing.

RT GI, on the other hand, has one or more directions of light hitting the face, and multiple shadows... not just directionless AO and directionless probes that just wrap "light" around the head.

If someone posted that picture of Drake in gameplay, in a scene where there is only indirect lighting in shadow, it would look very much the same.

Notice how the light on Drake above, on the right, wraps around and is monotone... it glows just like the voxel probe GI in Metro :D
 

ken_matthews

Banned
Oct 25, 2017
838
^ That is a generational leap in lighting, just from switching ray tracing on and off.

EDIT: And this is what I was trying to say. The difference in the Metro shots is just from ray-traced GI. It might get really crazy in two years' time, when you'll see 3rd-gen Nvidia RTX cards and tech. The high-end PC scene might see some pretty crazy ray-traced visuals during the first quarter of next gen.
 
OP
ResetEraVetVIP
Nov 20, 2019
1,861
I think it is generational in one aspect that I like a lot. Since RT works across the board in every scene in the game, it has the propensity to turn gameplay graphics into cutscene graphics. With rasterisation, a cutscene requires you to jack up shadow map resolution to 4096x4096 for one light and place two or three fill or rim lights around characters in scenes to make them look really great. Or make sure the camera never leaves human eye level so the SSR does not break. But with RT, you can pause any one moment of the game and everything still holds up in gameplay. It really elevates the character models to look more like the cutscenes, instead of there being a big discrepancy between the two, since they are consistently lit at the highest quality possible. The same thing happens, IMO, in Metro Exodus, where models in the overworld look quite a bit better.
This.
 
OP
ResetEraVetVIP
Nov 20, 2019
1,861
I'm looking forward to XGS's next more realistic title; a bunch of their studios are using Unreal Engine, and the advancements to that engine have been incredible.
The more studios using it, the easier it is for them to share techniques and technology.

Real-time Unreal work from small studios is already reaching levels we only expected from AAA studios.
Give AAA studios the engine and next gen will be truly glorious.

5NTUa57.gif



vwzEK9P.gif




hirokazu-yokohara-ue4-girl-c.jpg



hirokazu-yokohara-ue4-girl-b.jpg

All of these are one-man teams.
This. Look at the quality... it's great.