It really depends on the game and how sensitive you are to input latency/response time. If you've never worried about it before, you probably shouldn't start now. In the case of these games, between "Off" and "Ultra" there's a frame or two's worth of latency (16 ms = 1 frame @ 60 Hz), so it could be a noticeable improvement for you. If you're not sure, just set it to "On" and forget about it.
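To put rough numbers on that claim (purely illustrative; the refresh rates here are just examples):

```python
# Back-of-the-envelope frame-time math for the latency figures above.
for hz in (60, 144):
    frame_ms = 1000.0 / hz
    print(f"{hz} Hz: 1 frame = {frame_ms:.1f} ms, 2 frames = {2 * frame_ms:.1f} ms")
# 60 Hz:  1 frame = 16.7 ms, 2 frames = 33.3 ms
# 144 Hz: 1 frame =  6.9 ms, 2 frames = 13.9 ms
```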
So if you're hitting 60-100 fps, it's safe to turn on without much performance impact? Can't I use V-Sync with this option and basically have delay-free, screen-tear-free 60 fps gaming?

True true.
So why not just leave it at Ultra to play it safe? ...Is there any negative effect to leaving it always on Ultra for all games? ...With previous Nvidia drivers I always left the 'Maximum Pre-Rendered Frames' option at 'Use the 3D Application Setting'.

I honestly can't answer that. The drivers just came out, so I don't know if there are any negative effects to leaving it on Ultra for all games. I've always set Max Pre-Rendered Frames to 1 and never had any issues; that's equivalent to the "On" setting, which is why I recommended it.
The power of Turing... Nvidia trying to limit basic features like this just makes me want AMD. They did finally allow FreeSync and I'm grateful, but moves like this put me off. Thankfully I'm not locked into their hardware with an expensive G-Sync monitor.

Basic features? Uh, which GPU vendors offer this basic feature?
I guess set it to Ultra and know that you'll have the lowest latency possible. If you notice any issues in any game that weren't there before, then switch it to On and see if that fixes the problem. If it does, then just leave it on.
If Max Pre-Rendered Frames = 1 is the preferred setting (which is the equivalent of Low Latency = On), then why is the default setting Off with these new drivers? ...And what was 'Maximum Pre-Rendered Frames' = 'Use the 3D Application Setting' equivalent to with the older drivers - Low Latency = Off?

Off = 'Use the 3D Application Setting' (meaning the game engine will queue 1-3 frames). This is the default and hasn't changed.
Confused... should we enable these new options by default? What's the negative impact of doing so?

Performance can be lowered in some situations (because the game doesn't have a queue of frames anymore, it's going to be more affected by frame-to-frame variance).
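To make that queue/variance trade-off concrete, here's a toy model of the CPU-to-GPU pipeline. This is not how the driver actually works; the queue depths of 3, 1 and 0 are just stand-ins for the Off / On / Ultra descriptions in this thread, and the per-frame costs are made-up numbers:

```python
import random

def simulate(queue_depth, n_frames=5000, seed=1):
    """Toy pipeline: the CPU may run at most `queue_depth` frames ahead of the GPU."""
    rng = random.Random(seed)
    cpu_start = [0.0] * n_frames  # when input for frame i is sampled (CPU starts work)
    gpu_done = [0.0] * n_frames   # when frame i finishes rendering (is presented)
    cpu_free = gpu_free = 0.0
    for i in range(n_frames):
        cpu_ms = rng.uniform(6.0, 14.0)  # made-up per-frame CPU cost with jitter
        gpu_ms = rng.uniform(6.0, 14.0)  # made-up per-frame GPU cost with jitter
        # The CPU must wait until the GPU has drained the queue far enough.
        gate = gpu_done[i - queue_depth - 1] if i - queue_depth - 1 >= 0 else 0.0
        cpu_start[i] = max(cpu_free, gate)
        cpu_free = cpu_start[i] + cpu_ms             # frame i handed to the queue
        gpu_free = gpu_done[i] = max(gpu_free, cpu_free) + gpu_ms
    fps = 1000.0 * n_frames / gpu_done[-1]
    latency = sum(gpu_done[i] - cpu_start[i] for i in range(n_frames)) / n_frames
    return fps, latency

for depth, label in [(3, "Off (app queue)"), (1, "On"), (0, "Ultra")]:
    fps, latency = simulate(depth)
    print(f"{label:16s} queue={depth}: ~{fps:5.1f} fps, ~{latency:5.1f} ms input-to-display")
```

With a deep queue the GPU always has work waiting, so throughput holds up, but each frame was built from older input; with no queue the latency is minimal, but any CPU hiccup stalls the GPU directly, which is the frame-to-frame variance effect described above.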
Battlefield V DX12 performance is dogshit for me compared to DX11 anyway. On my 1080 Ti it just introduces a ton of stutter. DX11 is measurably smoother.

What CPU do you have? If you're CPU-limited, BF5 will be a stutterfest.
I'm interested to see the real-world results of ultra-low latency. Turning off Future Frame Rendering in BFV's menu borderline breaks the game, cutting a huge chunk out of the framerate.
If AMD's integer scaling is limited to its newest cards I'd be shocked. Nvidia, though, tends to try to sell its newer stuff or just limit things to sell other stuff. To tell you the truth, I'm not worried about the feature itself, just them limiting it to Turing cards. They put RTX on the 10-series because they want devs to start using it more, yet this has to be limited to Turing? Maybe it'll come to others later, I don't know, but right now it doesn't look good to me.
Anyone else with a 4K TV unable to set the integer setting without it resetting to aspect ratio?

Integer scaling applied fine on my LG C8 OLED.
I'll have to mess with my Nvidia settings in the morning then, thanks.
It's not supported yet.
As some food for thought regarding integer scaling being restricted to Turing cards (whether you like it or not is up to you, but I haven't seen anybody discuss why this is on a technical level yet, so I figured I'd throw this information out there): the most likely reason is that they are utilizing the INT32 cores of those Turing cards, something that Pascal and the cards before it simply do not have.

Traditionally, trying to do integer processing on a GPU produces fuzzy results, because what counted as integers in shaders had to be processed as floats regardless, so even comparisons you'd expect to be on integers wouldn't necessarily come out as exactly 0 or 1, for example.

INT32 cores solve this by outright doing what you'd expect with an integer value (no floating-point fuzziness; 1 will always equal 1, etc.). So even the shaders that claim to be doing this already are technically faking it, because your typical GPU can only calculate it as a float in the first place. This is a "fun" problem when dealing with colour palettes in restoring DOS-era games.

This is all an educated guess of course; Nvidia haven't explained the secret sauce behind it yet, but I'm betting it's that.
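As a concrete illustration of the float "fuzziness" being described (this is generic IEEE-754 behaviour, not a claim about what Nvidia's hardware or driver actually does):

```python
# Generic floating-point "fuzziness": the kind of off-by-one that bites
# palette-index math when everything has to go through floats.
x = 0.29 * 100           # "should" be 29
print(x)                 # 28.999999999999996 on typical IEEE-754 doubles
print(int(x))            # 28 -> truncation lands on the wrong integer (wrong palette entry)
print(round(x))          # 29 -> explicit rounding is needed to get the value you meant
print(29 * 100 // 100)   # 29 -> native integer math is exact every time
```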
It's not supported yet.
Nvidia is the only company that supports it currently.
If you want to be that technical, as far as I can tell, neither driver is currently out. :-p

Nah, I have the new driver lol
Oddly enough, I tested this out in Far Cry New Dawn. Frames went down from 80-ish to 60-70 on Ultra. RTX 2070 Max-Q. Any thoughts/tips?
In her Twitter video, Lisa explained that this feature will only be available on upcoming Gen 11 graphics and beyond - previous GPUs lack the hardware required for implementing integer scaling.
Cool insights, ty.
Looks like the driver is back up on Nvidia's site; however, it's the standard driver. It looks like I have the "DCH" driver, and the standard one won't install for me because of that:
"Your system requires a DCH driver package which can be downloaded automatically with Geforce Experience"
*sigh*
Keep messing around and integer scaling just won't activate; it immediately reverts to no scaling or aspect ratio.
4K TV at native resolution, no DSR factors, and YPbPr 4:2:2, which Nvidia says is supported. Alas.

Same here, I can't get it to stick. I'm not using any incompatible settings as far as I can tell, just multiple monitors at different resolutions. Does that disqualify me?
Yeah, I ain't touching this driver. I'll let others guinea pig it. Too many big changes to expect it to work without problems.
Display scaling has nothing to do with SIMD processing, as it happens on the video output, which is actually outside the GPU pipeline even. It's possible that Turing chips have a level of flexibility in those outputs which simply isn't there in older GPUs. Still, it should be possible to do such scaling via shader processing (as anyone who has ever used dgVoodoo2 should be aware), and no, you don't really need dedicated INT support for this either (again, as can be seen from dgVoodoo2 running such scaling on any DX11-compatible GPU). The performance impact from such an implementation can be noticeable, though, so NV is completely within their rights not to offer this as a generic solution on hardware where it would result in outright lower performance. Maybe they'll reconsider in time.
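For anyone unsure what the scaling itself involves, here's a minimal pure-Python sketch of integer (nearest-neighbour) scaling; the toy 2x2 "image" and the 3x factor are just for illustration, and real implementations run in the display hardware or a shader:

```python
# Integer scaling: replicate each source pixel by a whole factor, so no
# neighbouring pixels are ever blended together (no blur).
def integer_scale(image, factor):
    out = []
    for row in image:
        scaled_row = [px for px in row for _ in range(factor)]
        out.extend([list(scaled_row) for _ in range(factor)])
    return out

src = [["A", "B"],
       ["C", "D"]]
for row in integer_scale(src, 3):  # e.g. 1280x720 -> 3840x2160 is a clean 3x
    print(" ".join(row))
```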
You should ask a Radeon VII owner what they think about RIS.