Aug 30, 2020
2,171
This update gives a large performance increase on my 3080, whether from using the Khronos Vulkan extension instead of Nvidia's or from the new TAAU (probably mainly the latter?). Latest-gen AMD GPU users should be able to play it now, although the Steam overlay may be causing them problems?

Hey, everyone! Today we're releasing v1.4.0, featuring support for the final Vulkan Ray Tracing API and dynamic selection between the pre-existing NVIDIA VKRay and the new Khronos extension backends.

New Features:

  • Added support for the final Vulkan Ray Tracing API. The game can now run on any GPU supporting the `VK_KHR_ray_tracing_pipeline` extension.
  • Added temporal upscaling, or TAAU, for improved image quality at lower resolution scales.
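For readers curious how the dynamic backend selection can work: it is essentially a preference check over the device's extension list. A minimal sketch (the helper name and flat string-array interface are hypothetical; only the extension strings are real Vulkan identifiers):

```c
#include <stddef.h>
#include <string.h>

/* Pick the ray tracing backend from the device's extension list:
 * prefer the final Khronos extension, fall back to the older NVIDIA
 * vendor extension. */
const char *pick_rt_backend(const char **exts, int count)
{
    int has_khr = 0, has_nv = 0;
    for (int i = 0; i < count; i++) {
        if (strcmp(exts[i], "VK_KHR_ray_tracing_pipeline") == 0) has_khr = 1;
        if (strcmp(exts[i], "VK_NV_ray_tracing") == 0)           has_nv = 1;
    }
    if (has_khr) return "VK_KHR_ray_tracing_pipeline"; /* any vendor with KHR RT */
    if (has_nv)  return "VK_NV_ray_tracing";           /* pre-1.4 NVIDIA-only path */
    return NULL;                                       /* no ray tracing support */
}
```

In practice the extension list would come from `vkEnumerateDeviceExtensionProperties` rather than a string array.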


Fixed Issues:

Denoiser Improvements:

  • Implemented a new gradient estimation algorithm that makes the image more stable in reflections and refractions.
  • Implemented sampling across checkerboard fields in the temporal filter to reduce blurring.
  • Improved motion vectors for multiple refraction, in particular when thick glass is enabled.
  • Improved the temporal filter to avoid smearing on surfaces that appear at small glancing angles, e.g. on the floor when going up the stairs.
  • Improved the temporal filter to make lighting more stable on high-detail surfaces.
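As an aside on what these temporal-filter entries are tuning: at its core such a filter is an exponential moving average over reprojected history, with rejection when the history looks stale. A schematic sketch with illustrative constants, not the game's actual code:

```c
#include <math.h>

/* Blend the new noisy sample into the per-pixel history, but throw the
 * history away when the reprojected value differs too much (e.g. after
 * a disocclusion). This is the stability-versus-smearing trade-off the
 * changelog entries above are adjusting. */
float temporal_blend(float history, float sample)
{
    const float alpha  = 0.1f;  /* weight given to the new sample */
    const float reject = 0.5f;  /* history rejection threshold */
    if (fabsf(sample - history) > reject)
        return sample;          /* history invalid: restart accumulation */
    return history + alpha * (sample - history);
}
```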



Misc Improvements:

  • Added git branch name to the game version info.
  • Improved the console log to get more information in case of game crashes.
  • Increased precision of printed FPS when running timedemos.
  • Reduced the amount of stutter that happened when new geometry is loaded, like on weapon pickup.
  • Replaced the Vulkan headers stored in the repository with a submodule pointing to https://github.com/KhronosGroup/Vulkan-Headers
  • Static resolution scale can now be set to as low as 25%.
  • Vulkan validation layer can now be enabled through the `vk_validation` cvar.
  • Updated SDL2 version to changeset 13784.
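For illustration, a `vk_validation`-style toggle typically just adds `VK_LAYER_KHRONOS_validation` to the instance layer list. A hypothetical reduction (only the cvar name comes from the changelog; the helper is not the game's actual code):

```c
#include <string.h>

/* Translate a validation cvar into the layer list handed to
 * vkCreateInstance via ppEnabledLayerNames/enabledLayerCount. */
int build_layer_list(int vk_validation, const char **layers, int max)
{
    int n = 0;
    if (vk_validation && n < max)
        layers[n++] = "VK_LAYER_KHRONOS_validation";
    return n; /* number of layers enabled */
}
```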


Source
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
surprised they didn't add DLSS to this game. thought it would have been one of the first to receive it
 

pksu

Member
Oct 27, 2017
1,240
Finland
The problem with DLSS is that Nvidia is using the open-source Quake release, licensed under the GPL, and the DLSS implementation is closed source. One would think it would be possible to write some sort of dynamic module loader capable of loading and using a closed-source DLSS library, but that would also be against the GPL as long as the components share "complex data structures", IIRC.
 

Crowboy

Member
Oct 25, 2017
168
My Geforce Experience just notified me that there is a new driver update for this too.
 

neoak

Member
Oct 25, 2017
15,263
Shame there isn't some way to enable software ray-tracing, even if it's at a poor framerate. I'd like to see what the hubbub is about.
Open a new PowerPoint and paste screencaps of the videos. Then present the PowerPoint in full screen, navigate the slides.

There, Full software experience.
 

Eblo

Member
Oct 25, 2017
1,643
Shame there isn't some way to enable software ray-tracing, even if it's at a poor framerate. I'd like to see what the hubbub is about.
You can play Quake II RTX on GTX 1000 series cards at least. I think those do it via software? I tried on my 1070. All minimum settings netted me about 15 FPS lol
 

Tora

The Enlightened Wise Ones
Member
Jun 17, 2018
8,640
No idea how it used to run, but at 1440p I'm getting 140fps with a 3080
 
Oct 25, 2017
2,935
Does the new Vulkan support in 460.89 change behavior/performance for Pascal cards, or does the 1.4 update to the game do that on its own? The nvidia subreddit says there is a flickering problem on 1080 Ti's with this new driver. I'm still on 455.71, and am considering rolling back to 446.14 to remove the SteamVR stutter issue.

EDIT: Just rolled back and now I can pick 10 bit color at 4k in my control panel again. Hmm.
 
Last edited:
OP
Aug 30, 2020
2,171
I don't get it. Why Quake 2? Why not other classic FPSes?

Mainly because a few years back someone wrote a software-raytraced version of Quake 2, and then someone else wrote a CUDA-accelerated version based on the first. Nvidia then built on top of those previous works.

Why did the first guy do it? I think the environments are well set up for real-world lighting. There are actual physical light sources (that is, 3D meshes that the light textures were based upon) in the environments. That's something the original Quake only has some of (Quake 1 frequently places fake lights without any artistic sources).

Without that critical piece, a developer would have to go in and do art passes (decide the lighting) for raytracing to work.

That said, someone is working on Quake 1 path tracing right now: https://twitter.com/lh0xfb
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987

 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
So about Turing performance? Honestly not bad for first gen though.
from the initial release of the renderer
[Benchmark charts: Quake II RTX at 2160p and 1080p]
wccftech.com — Quake 2 RTX Is Out And What Kind Of Performance You Can Expect

Quake 2 may be a 22-year-old game and run on just about anything with an electrical current, but that's not what Quake 2 RTX is about. What started out as a mod to Quake 2, by Christoph Schied, known as Q2VKPT (Quake 2 Vulkan Path Tracing) is now a full standalone release thanks to the team at...

given the price, I wouldn't really call that "not bad for first gen". it just shows a possible flaw in AMD's design. there's probably benefits to their design, but it's currently not showing in RT games. AMD's probably banking on games not using RT much if at all
 

SapientWolf

Member
Nov 6, 2017
6,565
Also has native Vulkan multi GPU support for some reason.

Looking at the charts, I don't think balls out RT is going to be a thing on the consoles this gen. RDNA2 just doesn't seem cut out for it. That could end up creating a pretty wide gulf between PC and console lighting quality eventually.

The 3080 doesn't run out of poop until 4k but I think the smart money is on waiting one more GPU gen if you're all on board the RT train. AMD's RT performance should be good and Nvidia's should be great.
 

Akronis

Prophet of Regret - Lizard Daddy
Banned
Oct 25, 2017
5,451
Not to pile on AMD but they're also on a legit 7nm node while Nvidia was on 12nm with Turing.

I'm not even sure that matters in this case, they just don't have the fully dedicated hardware on the board for RT. They tried to find a cheaper alternative to NVIDIA's tensor RT cores by just straight up not having them.

Despite paying that early adopter fee with the 2080 Ti, I feel like NVIDIA's approach with dedicated tensor RT cores was absolutely the right move.
 
Last edited:

Veliladon

Member
Oct 27, 2017
5,558
given the price, I wouldn't really call that "not bad for first gen". it just shows a possible flaw in AMD's design. there's probably benefits to their design, but it's currently not showing in RT games. AMD's probably banking on games not using RT much if at all

Possible flaw? They're doing BVH on the shader cores instead of in fixed function hardware like Nvidia. Which is basically where the problem is. Turn on RTX on an Nvidia card and you're limited to the throughput of the RT cores. Turning on ray tracing on an AMD card not only limits performance to whatever TFLOPs the ray accelerators can put out but it also takes FP32 throughput away from everywhere else at the same time.

Honestly, we should just be thankful that RTG actually made a product that matches Nvidia in raster performance for once. After all, RTX only came out *checks notes* two years ago.
 

ILikeFeet

DF Deet Master
Banned
Oct 25, 2017
61,987
Possible flaw? They're doing BVH on the shader cores instead of in fixed function hardware like Nvidia. Which is basically where the problem is. Turn on RTX on an Nvidia card and you're limited to the throughput of the RT cores. Turning on ray tracing on an AMD card not only limits performance to whatever TFLOPs the ray accelerators can put out but it also takes FP32 throughput away from everywhere else at the same time.

Honestly, we should just be thankful that RTG actually made a product that matches Nvidia in raster performance for once. After all, RTX only came out *checks notes* two years ago.
"flaw" is the wrong word to use; rather, it might have been a bad call in hindsight. it's not like AMD was in the dark about RT when Microsoft was laying the groundwork for DX12. they probably thought it just wasn't gonna take off beyond single effects like shadows or AO (and they might still be right) at 30fps. maybe it's a case of just not having Nvidia's R&D budget, given where AMD was prior to Navi and Zen.

here's a comparison of the NV extension and the standard

 

brain_stew

Member
Oct 30, 2017
4,731

GameAddict411

Member
Oct 26, 2017
8,519
That's actually better than I thought from RDNA2 in a path traced title. Still utterly miserable performance considering the 6800XT is more expensive than a 3080 currently but better than I thought it would be based on Minecraft and Control.
Dude, that's noticeably worse than a 2080 Ti. If DLSS gets added to this game, it would be amazing.
 

BAW

Member
Oct 27, 2017
1,940
Mainly because a few years back someone wrote a software-raytraced version of Quake 2, and then someone else wrote a CUDA-accelerated version based on the first. Nvidia then built on top of those previous works.

Why did the first guy do it? I think the environments are well set up for real-world lighting. There are actual physical light sources (that is, 3D meshes that the light textures were based upon) in the environments. That's something the original Quake only has some of (Quake 1 frequently places fake lights without any artistic sources).

Without that critical piece, a developer would have to go in and do art passes (decide the lighting) for raytracing to work.

That said, someone is working on Quake 1 path tracing right now: https://twitter.com/lh0xfb

Wow, great explanation, thanks!
It's kind of impressive how they keep working on it and improving it instead of treating it like a one-off proof of concept.
I can confirm it both looks better and runs faster on my 3080. Previously 1080p upscaled to 2160p was blur city; not anymore. I don't know what temporal upscaling is, but it works.
 
Nov 2, 2017
2,275
So about Turing performance? Honestly not bad for first gen though.
It's not even close to Turing here. There isn't a big gap in raytracing performance between Ampere and Turing in most games; Ampere is only slightly faster. The heavier the raytracing workload, the bigger Ampere's lead, but it's still not a huge amount.

For example the 3070 is only 7% faster at 1440p than the 2080Ti, which is its raster equivalent Turing card, in Quake 2.
 

dgrdsv

Member
Oct 25, 2017
11,885
I'm not even sure that matters in this case, they just don't have the fully dedicated hardware on the board for RT. They tried to find a cheaper alternative to NVIDIA's tensor cores by just straight up not having them.
Tensor cores have nothing to do with ray tracing.

They're doing BVH on the shader cores instead of in fixed function hardware like Nvidia.
BVH is built on CPU and shader cores in both cases (at least I think so, it's possible that AMD is building it solely on CPU).
BVH testing is done on fixed function units in both cases.
Intersection evaluation is done on dedicated MIMD processors by NV and on shader cores by AMD.
The latter is the most likely reason for a comparative performance deficit. But it's also possible that other parts of the pipeline are running faster on NV h/w.
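The split being described can be pictured with a toy traversal loop. Everything here is a purely illustrative CPU sketch with 1D "intervals" standing in for bounding boxes, not real hardware code: the per-node box test is what both vendors run on fixed-function units, while the surrounding loop and hit evaluation run on shader cores on RDNA2 versus dedicated units inside NVIDIA's RT cores.

```c
typedef struct {
    float lo, hi;    /* interval the node covers (toy box test) */
    int left, right; /* child indices (internal nodes only) */
    int prim;        /* primitive id at a leaf, or -1 for internal */
} Node;

int trace(const Node *nodes, int root, float t)
{
    int stack[64], sp = 0, hit = -1;
    stack[sp++] = root;
    while (sp > 0) {                        /* traversal loop: shader cores (AMD)
                                               vs MIMD units in RT cores (NV) */
        const Node *n = &nodes[stack[--sp]];
        if (t < n->lo || t > n->hi)         /* box test: fixed-function on both */
            continue;
        if (n->prim >= 0) {                 /* hit evaluation */
            hit = n->prim;
            continue;
        }
        stack[sp++] = n->left;
        stack[sp++] = n->right;
    }
    return hit;
}
```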

So about Turing performance? Honestly not bad for first gen though.
Turing launch: "Wtf is this shit, how can RT be so slow, why is it so much slower than rasterization?"
RDNA2 launch two years later: "So it's about Turing performance with RT, not bad, despite actually being half as fast as Turing!"
 

Bluelote

Member
Oct 27, 2017
2,024
Also has native Vulkan multi GPU support for some reason.

Looking at the charts, I don't think balls out RT is going to be a thing on the consoles this gen. RDNA2 just doesn't seem cut out for it. That could end up creating a pretty wide gulf between PC and console lighting quality eventually.

The 3080 doesn't run out of poop until 4k but I think the smart money is on waiting one more GPU gen if you're all on board the RT train. AMD's RT performance should be good and Nvidia's should be great.

Check the Digital Foundry video on the Watch Dogs ray tracing on consoles; there are many ways to cut back the effects while still retaining a lot of the look. Also, Quake 2 RTX is heavily optimized for Nvidia, not really optimized for AMD, I would think. No doubt Nvidia is ahead in this approach they pioneered not long ago, but it surely can be explored on consoles, and there is always 30fps.

The 6800 XT is clearly delivering some playable performance on this, despite not being great at this sort of thing.
 

dodo021

Member
Oct 27, 2017
186
Does AMD use a denoiser for the RT part like Nvidia, or does AMD calculate RT at native resolution without denoising?
 

Pottuvoi

Member
Oct 28, 2017
3,064
I wonder how BVH and actual tracing part would change if developer would build this from ground up for AMD hardware.
Does AMD use a denoiser for the RT part like Nvidia, or does AMD calculate RT at native resolution without denoising?
Both use the same compute-based denoiser.
Denoising doesn't really have anything to do with resolution.
 
Last edited:

Pipyakas

Member
Jul 20, 2018
549
AMD not having a software RT fallback in their driver on both Windows and Linux is killing me here. I want to murder my 580 just to experiment with RT god damn it

On another note, seems like 6800XT is getting roughly 50% performance of the 3080 in a fully path traced game, ouch
 

brain_stew

Member
Oct 30, 2017
4,731
Dude, that's noticeably worse than a 2080 Ti. If DLSS gets added to this game, it would be amazing.

The 6800XT is actually slower than a 2070 there. A 2080Ti isn't even a ballpark comparison.

I've seen a lot of talk about "1st generation" RT from AMD, but it is far slower than Nvidia's 1st-generation RT solution. It's slow because AMD accelerates less of the RT pipeline in hardware than Nvidia does, and that can't be hand-waved away as just 1st-generation hardware.
 

dgrdsv

Member
Oct 25, 2017
11,885
I wonder how BVH and actual tracing part would change if developer would build this from ground up for AMD hardware.
Would probably change back into Quake 2 running under OpenGL without RT.
Unless someone would figure out how to fit the whole BVH into 128MB IC and how to do full on path tracing with zero ray incoherence.
 

Akronis

Prophet of Regret - Lizard Daddy
Banned
Oct 25, 2017
5,451
Tensor cores have nothing to do with ray tracing.

Are they just for accelerating ML loads for things like DLSS then? I had thought they benefited RT in a more direct way beyond that.

EDIT: I'm dumb, I was combining the RT cores and tensor cores together in my brain
 
Last edited:

Veliladon

Member
Oct 27, 2017
5,558
BVH testing is done on fixed function units in both cases.
Intersection evaluation is done on dedicated MIMD processors by NV and on shader cores by AMD.

You sure? This was AMD's presentation:

[Slide: AMD RDNA2 ray accelerator presentation]


Building the BVH set would have to happen on the CPU, yes, but traversal of the BVH is handled with shader code. The ray accelerator does intersections.
 

SapientWolf

Member
Nov 6, 2017
6,565
check the digital foundry video on the watchdogs ray tracing on consoles, there are many ways to cut back the effects while still retaining a lot of it,
also Quake 2 rtx is heavily optimized for Nvidia, not really Optimized for AMD I would think,
no doubt Nvidia is ahead in this approach they pioneered not long ago, but it surely can be explored on consoles, and there is always 30FPS.

6800xt is clearly delivering some playable performance on this, despite not being great at this sort of thing.
I was thinking more along the lines of using RT for shadows, reflections, global illumination, and ambient occlusion at the same time, which is basically a no-go on PC without DLSS right now. I predict that most console games will pick and choose a few effects rather than implement it across the board. I personally think AO and reflections are where you get the most bang for your buck.
 

Spoit

Member
Oct 28, 2017
3,987
It's not even close to Turing here. There isn't a big gap in performance for raytracing in games between Ampere and Turing. Ampere is only slightly better in raytracing than Turing in most games. The more raytracing the better it becomes but not by a huge amount.

For example the 3070 is only 7% faster at 1440p than the 2080Ti, which is its raster equivalent Turing card, in Quake 2.
This, along with Minecraft and Control, is one of the handful of games where the performance is not raster-limited (and the AMD cards actually have competitive raster performance, which arguably makes the difference even worse)
 

dgrdsv

Member
Oct 25, 2017
11,885
Are they just for accelerating ML loads for things like DLSS then?
Yeah. They also handle FP16 shading on Turing and Ampere.

I had thought they benefited RT in a more direct way beyond that.
There was some (misleading?) talk about using TCs for RT denoising, but that hasn't really happened; no RT game released so far uses them for that.

Building the BVH set would have to happen on the CPU, yes but traversal of the BVH is handled with shader code. Ray accelerator does intersections.
Correct, although if AMD builds BVH completely on CPU it's another weak point of their RT approach - NV is building/updating BVH's BLAS on GPU, only TLAS are built on CPU.
Intersection testing is "BVH testing", same thing. Both NV and AMD are doing this on FF units.
Hit (intersection) evaluation is done on shading SIMD by AMD and dedicated MIMD inside RT cores by NV.
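The BLAS/TLAS split described here can be sketched with toy data structures (illustrative structs, not Vulkan API types): each bottom-level structure owns geometry and can be built or refitted on the GPU, while the top level is just a flat array of transformed instances, small enough to rebuild on the CPU every frame.

```c
typedef struct { int tri_count; } Blas;                 /* geometry container */
typedef struct {
    float transform[12];  /* 3x4 instance transform */
    const Blas *blas;     /* geometry this instance references */
} TlasInstance;

/* CPU-side TLAS rebuild: repopulate the instance array each frame. */
int rebuild_tlas(TlasInstance *tlas, const Blas **blases,
                 const float (*xforms)[12], int count)
{
    for (int i = 0; i < count; i++) {
        tlas[i].blas = blases[i];
        for (int j = 0; j < 12; j++)
            tlas[i].transform[j] = xforms[i][j];
    }
    return count;
}
```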