
Ploid 6.0

Member
Oct 25, 2017
12,440
The power of Turing... Nvidia trying to limit basic features like this just makes me want AMD. They did finally allow FreeSync and I'm grateful, but moves like this put me off. Thankfully I'm not locked into their hardware with an expensive G-Sync monitor.
 

Kalik

Banned
Nov 1, 2017
4,523
It really depends on the game, and how sensitive you are to input latency/response time. If you've never worried about it before, you probably shouldn't start now. In the case of these games, between "Off" and "Ultra" there's a frame or two's worth of latency (16ms = 1 frame @ 60Hz), so it could be a noticeable improvement to you. If you're not sure, just set it to "On" and forget about it.

So why not just leave it at Ultra to play it safe? Is there any negative effect to leaving it on Ultra for all games? With previous Nvidia drivers I always left the 'Maximum Pre-Rendered Frames' option at 'Use the 3D Application Setting'.
 

Smash-It Stan

Member
Oct 25, 2017
5,302
True true.



It really depends on the game, and how sensitive you are to input latency/response time. If you've never worried about it before, you probably shouldn't start now. In the case of these games, between "Off" and "Ultra" there's a frame or two's worth of latency (16ms = 1 frame @ 60Hz), so it could be a noticeable improvement to you. If you're not sure, just set it to "On" and forget about it.
So if you're hitting 60-100 fps, it's safe to turn on without much performance impact? Can't I use V-Sync with this option and basically have delay-free, tear-free 60 fps gaming?
 

Talus

Banned
Dec 9, 2017
1,386
So why not just leave it at Ultra to play it safe? Is there any negative effect to leaving it on Ultra for all games? With previous Nvidia drivers I always left the 'Maximum Pre-Rendered Frames' option at 'Use the 3D Application Setting'.
I honestly can't answer that. The drivers just came out, so I don't know if there are any negative effects to leaving it on Ultra for all games. I have always set Max Pre-Rendered Frames to 1 and never had any issues; that's equivalent to the On setting, which is why I recommended it.

I guess set it to Ultra and know that you'll have the lowest latency possible. If you notice any issues in any game that weren't there before, then switch it to On and see if that fixes the problem. If it does, just leave it On.
 

Talus

Banned
Dec 9, 2017
1,386
The power of Turing... Nvidia trying to limit basic features like this just makes me want AMD. They did finally allow FreeSync and I'm grateful, but moves like this put me off. Thankfully I'm not locked into their hardware with an expensive G-Sync monitor.
Basic features? Uh, which GPU vendors offer this basic feature?
 

Kalik

Banned
Nov 1, 2017
4,523
I honestly can't answer that. The drivers just came out, so I don't know if there are any negative effects to leaving it on Ultra for all games. I have always set Max Pre-Rendered Frames to 1 and never had any issues; that's equivalent to the On setting, which is why I recommended it.

I guess set it to Ultra and know that you'll have the lowest latency possible. If you notice any issues in any game that weren't there before, then switch it to On and see if that fixes the problem. If it does, just leave it On.

If Max Pre-Rendered Frames = 1 is the preferred setting (the equivalent of Low Latency = On), then why is the default with these new drivers Off? And what is the old 'Maximum Pre-Rendered Frames' = 'Use the 3D Application Setting' option equivalent to in the new scheme, Low Latency = Off?
 

Visuwell

Member
Jan 22, 2019
98
Oddly enough, I tested this out in Far Cry New Dawn. Frames went down from 80ish to 60-70 on Ultra. RTX 2070 Max-Q. Any thoughts/tips?
 

Talus

Banned
Dec 9, 2017
1,386
If Max Pre-Rendered Frames = 1 is the preferred setting (the equivalent of Low Latency = On), then why is the default with these new drivers Off? And what is the old 'Maximum Pre-Rendered Frames' = 'Use the 3D Application Setting' option equivalent to in the new scheme, Low Latency = Off?
Off = 'Use the 3D Application Setting' (the game engine will queue 1-3 frames). This is the default and hasn't changed.
On = limits the engine to 1 frame.
Ultra = 'just in time' (the CPU hands the frame to the GPU for rendering just as it's needed, so there's no queue. For simplicity's sake, let's just call it 0 frames, unless someone can explain why it's wrong to do so lol)
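As a rough illustration of the queue depths described above (a conceptual sketch only, with the latency math implied by each setting; this is not Nvidia's actual driver logic):

```python
# Conceptual sketch of the pre-rendered frame queue depths described above:
# just the latency math implied by each setting, not Nvidia's driver logic.

FRAME_TIME_MS_60HZ = 1000 / 60  # ~16.7 ms per frame at 60 Hz

# Assumed mapping of Low Latency Mode settings to maximum queue depth:
QUEUE_DEPTH = {
    "Off":   3,  # 'Use the 3D Application Setting': engine may queue 1-3 frames
    "On":    1,  # engine limited to 1 pre-rendered frame
    "Ultra": 0,  # 'just in time': no queue
}

def worst_case_queue_latency_ms(setting: str) -> float:
    """Worst-case extra latency from frames sitting in the CPU-to-GPU queue."""
    return QUEUE_DEPTH[setting] * FRAME_TIME_MS_60HZ

for setting in ("Off", "On", "Ultra"):
    print(f"{setting:5s}: up to {worst_case_queue_latency_ms(setting):.1f} ms of added latency at 60 Hz")
```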
 

low-G

Member
Oct 25, 2017
8,144
If Max Pre-Rendered Frames = 1 is the preferred setting (the equivalent of Low Latency = On), then why is the default with these new drivers Off? And what is the old 'Maximum Pre-Rendered Frames' = 'Use the 3D Application Setting' option equivalent to in the new scheme, Low Latency = Off?

Max Pre-Rendered Frames traditionally defaulted to 3!
 

EatChildren

Wonder from Down Under
Member
Oct 27, 2017
7,046
Battlefield V DX12 performance is dogshit for me over DX11 anyway. On my 1080 Ti it just introduces a ton of stutter. DX11 is measurably smoother.

I'm interested to see the real world variables of ultra low latency. Turning off future frame rendering in BFV's menu borderline breaks the game, cutting a huge chunk out of the framerate.
 

devSin

Member
Oct 27, 2017
6,198
Confused... should we enable these new options by default? What's the negative impact of doing so?
Performance can be lower in some situations: because the game no longer has a queue of frames, it's going to be more affected by frame-to-frame variance.

Lowering it improves latency because what you see was just generated (or generated one frame earlier if set to On). Some people will swear by it, but I doubt most people could ever even notice.
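To make the frame-to-frame variance point concrete, here's a toy sketch (purely illustrative, made-up frame times, not a real pipeline model):

```python
# Toy illustration of the point above: a queue of pre-rendered frames can hide
# an occasional slow CPU frame, while with no queue the spike shows up directly
# as a late frame. The frame times below are made up for illustration.

cpu_frame_times_ms = [10, 10, 10, 28, 10, 10, 10, 27, 10]  # two hypothetical spikes
display_interval_ms = 1000 / 60                            # ~16.7 ms budget at 60 Hz

def late_frames(queue_depth: int) -> int:
    """Count frames that blow the budget for a given pre-rendered queue depth."""
    buffered = queue_depth          # frames already prepared and waiting
    late = 0
    for t in cpu_frame_times_ms:
        if t > display_interval_ms:
            if buffered > 0:
                buffered -= 1       # a buffered frame covers the slow one
            else:
                late += 1           # nothing buffered: the spike is visible
        else:
            buffered = min(queue_depth, buffered + 1)  # CPU gets ahead again
    return late

print("queue depth 3 (Off):  ", late_frames(3), "late frames")
print("queue depth 0 (Ultra):", late_frames(0), "late frames")
```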
 

Talus

Banned
Dec 9, 2017
1,386
Battlefield V DX12 performance is dogshit for me over DX11 anyway. On my 1080 Ti it just introduces a ton of stutter. DX11 is measurably smoother.

I'm interested to see the real world variables of ultra low latency. Turning off future frame rendering in BFV's menu borderline breaks the game, cutting a huge chunk out of the framerate.
What CPU do you have? If you're CPU-limited, BF5 will be a stutterfest.
 

Veliladon

Member
Oct 27, 2017
5,563
TFW you have G-Sync and don't care when a frame is dispatched to the monitor, and just leave ultra low latency on.

 

Ploid 6.0

Member
Oct 25, 2017
12,440
Basic features? Uh, which GPU vendors offer this basic feature?
I'd be shocked if AMD's integer scaling were limited to its newest cards. Nvidia, though, tends to try to sell its newer stuff, or just limit things to sell other stuff. To tell you the truth, I'm not worried about the feature itself, just them limiting it to Turing cards. They put RTX on the 10 series because they want devs to start using it more, yet this has to be limited to Turing? Maybe it'll come to other cards later, I don't know, but right now it doesn't look good to me.

Like I said, I'm shocked they added FreeSync support, but even then they made it seem like only a limited number of monitors worked, and when they showed it off they used monitors that are already problematic on AMD cards, to suggest that it will likely not work well if it's not a G-Sync-certified FreeSync monitor (there was no way my next card would be Nvidia if it still didn't support VRR or FreeSync).

I see there's an app to get the feature without Turing at least, but moves like this, and Intel with its K vs. non-K and now i9 and motherboard feature lockouts, make me want a more open product like AMD's. I'm only on Nvidia now because people were paying double for AMD cards a while ago and I sold mine. I loved overclocking on Polaris (BIOS editing, memory string swapping, so much freedom, though I hear they locked some of that down for later cards) versus Pascal.
 

Xiaomi

Member
Oct 25, 2017
7,237
Apex runs a little smoother for me at 1440p on my undervolted 2080. Everything else seems more or less the same, haven't really noticed a big difference using ultra low latency.
 

Edward850

Software & Netcode Engineer at Nightdive Studios
Verified
Apr 5, 2019
998
New Zealand
As some food for thought regarding integer scaling being restricted to Turing cards (whether you like it or not is up to you, but I haven't seen anybody discuss why this is on a technical level yet, so I figured I'd throw this information out there): the most likely reason is that they are utilizing the Int32 cores of said Turing cards, something that Pascal and the cards before it simply do not have.
Traditionally, trying to do integer processing on a GPU produces fuzzy results, because integers in shaders had to be processed as floats regardless, so comparing what you'd think were integers wouldn't necessarily come out to exactly 0 or 1, for example.
Int32 cores solve this by outright doing what you'd expect with an integer value (no floating fuzziness, 1 will always equal 1, etc.). So even the shaders that claim to be doing this already are technically faking it, because your typical GPU can only calculate it as a float in the first place. This is a "fun" problem when dealing with colour palettes when restoring DOS-era games.

This is all an educated guess of course, Nvidia haven't explained the sauce behind it yet, but I'm betting it's that.
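As a rough analogue of that "floating fuzziness" in plain Python (numpy's float32 standing in for shader floats here; an illustration, not actual shader code):

```python
import numpy as np

# Ten steps of 0.1 "should" land exactly on the integer 1...
x = np.float32(0.0)
for _ in range(10):
    x += np.float32(0.1)

print(x)                     # ~1.0000001: close to 1, but not exactly 1
print(x == 1.0)              # False: an exact "integer" comparison silently fails
print(abs(x - 1.0) < 1e-5)   # True: the usual float-era workaround is an epsilon test
```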
 

DrDeckard

Banned
Oct 25, 2017
8,109
UK
Apex is flying for me. Is this black magic? I'm at 1440p maxed out and now hitting 130-165 fps more often than not on a ROG Swift. I'm sure it was between 90 and 120 fps before.
 

Deleted member 14568

User requested account closure
Banned
Oct 27, 2017
2,910
Fucking Nvidia, artificially limiting features to prop up Turing sales. Wish AMD was competitive at the high end so I wouldn't have to deal with this BS.
 

Banzai

The Fallen
Oct 28, 2017
2,598
Not seeing much of a difference in Apex Legends with my 1060 6GB, but maybe my card wasn't really the intended target for this?
 

eddy

Member
Oct 25, 2017
4,763
As some food for thought regarding integer scaling being restricted to Turing cards (whether you like it or not is up to you, but I haven't seen anybody discuss why this is on a technical level yet, so I figured I'd throw this information out there): the most likely reason is that they are utilizing the Int32 cores of said Turing cards, something that Pascal and the cards before it simply do not have.

There are probably semi-good reasons. That said, integers map cleanly onto binary32 up to 2^24 (see JavaScript), so I suspect a 'slow' implementation could be written that would be just fine in 99% of cases, given the sort of content you'd want to do this on. Now, FP is notoriously finicky, but I'm sure the smart people at these companies could come up with a solution if they really wanted to. Heck, you could probably just have the driver do it on the CPU.

(IMO Intel and Nvidia have set themselves up to look bad if, say, AMD just goes for it and eats what must realistically be a small performance hit to give this to everyone.)
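A quick way to check the 2^24 figure (a small sketch, using numpy's float32 as binary32):

```python
import numpy as np

# binary32 has a 24-bit significand, so every integer up to 2**24 is exact...
print(np.float32(2**24) == 2**24)          # True: 16,777,216 round-trips exactly
# ...but 2**24 + 1 cannot be represented and rounds back down:
print(np.float32(2**24 + 1) == 2**24 + 1)  # False
print(int(np.float32(2**24 + 1)))          # 16777216

# Pixel coordinates and 8-bit colour values sit far below this limit,
# which is the point being made above.
```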
 

abracadaver

Banned
Nov 30, 2017
1,469
I now have the Variable Refresh Rate option in the Windows graphics settings.

Should I turn it on if I already have G-Sync? Any downsides to this?
 

Flappy Pannus

Member
Feb 14, 2019
2,353
As some food for thought regarding integer scaling being restricted to Turing cards (whether you like it or not is up to you, but I haven't seen anybody discuss why this is on a technical level yet, so I figured I'd throw this information out there): the most likely reason is that they are utilizing the Int32 cores of said Turing cards, something that Pascal and the cards before it simply do not have.
Traditionally, trying to do integer processing on a GPU produces fuzzy results, because integers in shaders had to be processed as floats regardless, so comparing what you'd think were integers wouldn't necessarily come out to exactly 0 or 1, for example.
Int32 cores solve this by outright doing what you'd expect with an integer value (no floating fuzziness, 1 will always equal 1, etc.). So even the shaders that claim to be doing this already are technically faking it, because your typical GPU can only calculate it as a float in the first place. This is a "fun" problem when dealing with colour palettes when restoring DOS-era games.

This is all an educated guess of course, Nvidia haven't explained the sauce behind it yet, but I'm betting it's that.
cool insights, ty
 

Smokey

Member
Oct 25, 2017
4,178
Looks like the driver is back up on Nvidia's site; however, it's the standard driver. It looks like I have the "DCH" driver installed, and the standard one won't install for me because of that:


"Your system requires a DCH driver package which can be downloaded automatically with Geforce Experience"

*sigh*


Got around this by using DDU to remove the driver, unplugging the ethernet cable, installing the Gamescom driver (without GFE), then rebooting with ethernet reconnected, and here I am. So yeah, I can confirm that it's back up and ready for download for those who don't want GFE forced on them during install.
 

Chance Hale

Member
Oct 26, 2017
11,901
Colorado
Keep messing around and integer scaling just won't activate; it immediately reverts to no scaling or aspect ratio.

4K TV at native resolution, no DSR factors, and YPbPr 422, which Nvidia says is supported. Alas.
 
Oct 25, 2017
2,950
Has anyone done any hard latency testing on the new driver's Ultra Low Latency mode for 60fps locked games?

It doesn't seem to cause me any problems online when playing Tekken 7. I can play just fine at 4k x 1.25 SS on my 1080 Ti/4770k combo, vsync off, DS4+official BT adapter.

The game seems just as snappy as ever, but it's been 2 months since I've played it on my home rig. It could just be placebo.
 

zombipuppy

Member
Oct 25, 2017
222
Keep messing around and integer scaling just won't activate; it immediately reverts to no scaling or aspect ratio.

4K TV at native resolution, no DSR factors, and YPbPr 422, which Nvidia says is supported. Alas.
Same here, I can't get it to stick. I'm not using any incompatible settings as far as I can tell. Just multiple monitors at different resolutions. Does that disqualify me?
 

Gitaroo

Member
Nov 3, 2017
8,087
This driver is legit, at least for FH4. The game used to run at 50-60fps in 4K Ultra + Extreme graphics without MSAA on my 2080, so I had to drop the resolution to 1800p and enable 2xAA. Now I can bump the res back up to full 4K and keep the 2xAA; the only time it drops frames is when the camera pans around before the race, and it's locked to 60fps in gameplay.
 
Oct 25, 2017
7,695
Keep messing around and integer scaling just won't activate; it immediately reverts to no scaling or aspect ratio.

4K TV at native resolution, no DSR factors, and YPbPr 422, which Nvidia says is supported. Alas.

Worked it out: it won't integer scale on my TV if I set it to 1080p, but it DOES integer scale if I set it to 1920x1080 (not HDTV mode). Noticeable difference.
 

Deleted member 49611

Nov 14, 2018
5,052
Yeah, I ain't touching this driver. I'll let others guinea pig it. Too many big changes to expect it to work without problems.
 

Skyfireblaze

Member
Oct 25, 2017
11,257
Yeah, I ain't touching this driver. I'll let others guinea pig it. Too many big changes to expect it to work without problems.

Well, just chiming in with my GTX 1070: zero perceivable issues, games run as well as ever if not better, and Ultra Low Latency Mode also worked fine in the three games I've tried so far. I don't use GFE, so I can't comment on that end.
 

Skyebaron

Banned
Oct 28, 2017
4,416
My Superposition score went up on my 1080 Ti by about 100. FFXIV and Wolfenstein: Youngblood were doing very fine in 4K High.
 

Hadoken

Member
Oct 25, 2017
306
The low latency on Ultra does feel more responsive in Sekiro (FPS uncapped) and AC: Odyssey, but I'm not sure if it's placebo, tbh. What I did notice is that with AC: Odyssey, low latency on Ultra gives more consistent FPS than Off. With it Off, the FPS graph goes up and down all over the place.
 

dgrdsv

Member
Oct 25, 2017
12,024
As some food for thought regarding integer scaling being restricted to Turing cards (whether you like it or not is up to you, but I haven't seen anybody discuss why this is on a technical level yet, so I figured I'd throw this information out there): the most likely reason is that they are utilizing the Int32 cores of said Turing cards, something that Pascal and the cards before it simply do not have.
Traditionally, trying to do integer processing on a GPU produces fuzzy results, because integers in shaders had to be processed as floats regardless, so comparing what you'd think were integers wouldn't necessarily come out to exactly 0 or 1, for example.
Int32 cores solve this by outright doing what you'd expect with an integer value (no floating fuzziness, 1 will always equal 1, etc.). So even the shaders that claim to be doing this already are technically faking it, because your typical GPU can only calculate it as a float in the first place. This is a "fun" problem when dealing with colour palettes when restoring DOS-era games.

This is all an educated guess of course, Nvidia haven't explained the sauce behind it yet, but I'm betting it's that.
Display scaling has nothing to do with SIMD processing, as it happens on the video output, which is actually outside of the GPU pipeline. It's possible that Turing chips have a level of flexibility in said outputs which simply isn't there in older GPUs. Still, it should be possible to do such scaling via shader processing (as anyone who has ever used dgVoodoo2 should be aware), and no, you don't really need dedicated INT support for this either (again, as can be seen from dgVoodoo2 running such scaling on any DX11-compatible GPU). The performance impact of such an implementation can be noticeable though, so NV is completely within their rights not to allow it as a generic solution on h/w where it would result in outright lower performance. Maybe they'll reconsider in time.

Personally, I think that for older games this isn't even an issue right now, as you can run them through wrappers/emulators which give you integer or integer+aspect scaling (which is something absent from this driver) if you want to. The biggest and most interesting use case for integer scaling right now is running a modern game on a 4K screen at 1080p output and having perfect 2x2 scaling, and in such cases a solution which results in considerable performance loss may not be an ideal one.
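For reference, the shader-style integer scaling dgrdsv describes boils down to nearest-neighbour duplication of pixels. A minimal CPU-side sketch (plain numpy standing in for a shader or dgVoodoo2-style wrapper, not Nvidia's driver path):

```python
import numpy as np

def integer_scale(image: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upscale by an integer factor: (H, W, C) -> (H*f, W*f, C).
    Each source pixel becomes an exact factor-by-factor block, with no filtering."""
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

# Example: a 1080p frame scaled 2x to fill a 4K display (the use case above).
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
scaled = integer_scale(frame, 2)
print(scaled.shape)  # (2160, 3840, 3)
```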