
RAWRferal

Member
Oct 27, 2017
1,361
London, UK
I can't even understand how this works so well. It doesn't make sense that you can have parity with an internal resolution that's not even Full HD. Can someone explain this in the simplest terms possible?
The AI is so well trained on this specific task that it essentially knows how to fill in the gaps with a high degree of accuracy.

This applies to machine learning in general!
 
OP
OP
Tahnit

Tahnit

Member
Oct 25, 2017
9,965
Tahnit
I gave your 4K DLSS 720p idea a go, and while it does work and may look fine in screenshots, the thing you didn't mention is how much worse it looks in motion than, say, 4K DLSS 1080p.
It does make sense, but I think you are creating a couple of false expectations here. DLSS is awesome, but internal render resolution is not something that can be turned down without starting to lose significant ground in the image quality department, at least currently.

Try 960p. That works even better.
 

Diablos

has a title.
Member
Oct 25, 2017
14,594
The AI is so well trained on this specific task that it essentially knows how to fill in the gaps with a high degree of accuracy.

This applies to machine learning in general!
So it just boils down to machine learning...

Can next gen consoles do this? I.e. is it something AMD can implement without needing new hardware? Would be a shame if next gen consoles miss out on this
 

RAWRferal

Member
Oct 27, 2017
1,361
London, UK
So it just boils down to machine learning...

Can next gen consoles do this? I.e. is it something AMD can implement without needing new hardware? Would be a shame if next gen consoles miss out on this

I think AMD would need to invest heavily, but it's not impossible, especially if Nvidia's offering here keeps its momentum. I wouldn't hold my breath, though!

Helpful linky here btw: https://www.nvidia.com/en-gb/geforce/news/nvidia-dlss-your-questions-answered/
 

spam musubi

Member
Oct 25, 2017
9,381
www.tweaktown.com

This new PlayStation 5 patent teases NVIDIA DLSS style technology

Sony's newly published patent teases NVIDIA Deep Learning Super Sampling (DLSS) technology on its next-gen PlayStation 5 console.

maybe it's still possible?

Patents always baffle me; this seems like bullshit considering deep image reconstruction has been a thing for years. How does Sony get to patent this?

Also, the technology to do this isn't really mature enough outside of Nvidia's ecosystem. I doubt anything will come of this.
 

Kieli

Self-requested ban
Banned
Oct 28, 2017
3,736
So it just boils down to machine learning...

Can next gen consoles do this? I.e. is it something AMD can implement without needing new hardware? Would be a shame if next gen consoles miss out on this

You need specialized hardware. Nvidia cards have Tensor cores which perform the calculations. I don't think AMD has anything we know of that can do similar calculations, assuming they've even bothered to train the models in the first place.
 

SapientWolf

Member
Nov 6, 2017
6,565
You need specialized hardware. Nvidia cards have Tensor cores which perform the calculations. I don't think AMD has anything we know of that can do similar calculations, assuming they've even bothered to train the models in the first place.
You can still do a version of machine learning upscaling without tensor cores. It was implemented in Control before DLSS 2.0. It obviously doesn't look as good as 2.0 but it looks far better than it would without it.
 

Armaros

Member
Oct 25, 2017
4,901
You can still do a version of machine learning upscaling without tensor cores. It was implemented in Control before DLSS 2.0. It obviously doesn't look as good as 2.0 but it looks far better than it would without it.

There is a massive performance hit doing DLSS type computations without something like tensor cores. To the point the rewards are not worth the overall loss of normal GPU rendering.
 

BeI

Member
Dec 9, 2017
5,983
www.tweaktown.com

This new PlayStation 5 patent teases NVIDIA DLSS style technology

Sony's newly published patent teases NVIDIA Deep Learning Super Sampling (DLSS) technology on its next-gen PlayStation 5 console.

maybe it's still possible?

Since the XsX has some level of machine learning support and the PS5 is on the same GPU architecture, I'd guess the PS5 will have a lesser amount of it. So "DLSS" probably could be done, but it would eat up more ms per frame and probably be less worthwhile. The 3080 has something like ~10x the machine learning performance of the XsX, if I read the specs right.
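As a rough sanity check on that ~10x figure, using commonly quoted peak numbers (treat these as assumptions pulled from public spec sheets, not gospel):
Code:
# Rough sanity check of the ~10x claim.
# Assumed figures, not from this thread:
#   Xbox Series X: ~49 INT8 TOPS on the shader cores
#   RTX 3080:      ~238 INT8 Tensor TOPS dense, ~476 with sparsity
xsx_tops = 49
rtx3080_dense, rtx3080_sparse = 238, 476
print(rtx3080_dense / xsx_tops)   # ~4.9x
print(rtx3080_sparse / xsx_tops)  # ~9.7x, i.e. the "~10x" ballpark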
 

jotun?

Member
Oct 28, 2017
4,500
What does it take from the developer side to implement DLSS? Can anyone do it, or do you have to work with Nvidia to get your game added to their training/drivers?
 

Deleted member 14927

User requested account closure
Banned
Oct 27, 2017
648
Since the XsX has some level of machine learning support and the PS5 is on the same GPU architecture, I'd guess the PS5 will have a lesser amount of it. So "DLSS" probably could be done, but it would eat up more ms per frame and probably be less worthwhile. The 3080 has something like ~10x the machine learning performance of the XsX, if I read the specs right.

I thought one of the PS5 hardware architects had confirmed, via a leaked conversation, that there's no ML hardware in the console?

I also seem to remember reading somewhere that the XsX ML hardware was a request from the Azure team, so I'm unsure whether it can be utilised for game upscaling.
 

Deleted member 1594

Account closed at user request
Banned
Oct 25, 2017
5,762
What annoys me about DLSS is that it's very picky about what resolutions it will work for. If I want to use it with my monitor's 4K equivalent (5160x2160), DLSS won't work. But if I switch to 3840x2160, it works just fine. But I don't want to play in 16:9 :(
 

Csr

Member
Nov 6, 2017
2,031
www.tweaktown.com

This new PlayStation 5 patent teases NVIDIA DLSS style technology

Sony's newly published patent teases NVIDIA Deep Learning Super Sampling (DLSS) technology on its next-gen PlayStation 5 console.

maybe it's still possible?

This is about scanning objects with a camera and has nothing to do with DLSS. There was a thread about this with the same misconception and a false title, which was corrected later.

https://www.resetera.com/threads/so...scanning-see-threadmark.258075/#post-41091537
 
Last edited:

Vuze

Member
Oct 25, 2017
4,186
What annoys me about DLSS is that it's very picky about what resolutions it will work for. If I want to use it with my monitor's 4K equivalent (5160x2160), DLSS won't work. But if I switch to 3840x2160, it works just fine. But I don't want to play in 16:9 :(
Say what now? That's my resolution as well :( Is this even true for 2.0 games?
 

UltraMagnus

Banned
Oct 27, 2017
15,670
This is control 1440p DLSS from only 576p that someone put up on Reddit (guy has also included a 2888p comparison for fun).


He says in motion he can't notice a difference. The detail on the jacket gets a tad softer in DLSS mode but otherwise it's pretty close.
 

Deleted member 1594

Account closed at user request
Banned
Oct 25, 2017
5,762
Say what now? That's my resolution as well :( Is this even true for 2.0 games?
I tried it in control, which has DLSS 2.0. It's disabled at that resolution.

5160x2160. Notice how render resolution is locked even though DLSS is selected.
YQVAWxR.png


3840x2160. Works fine.
5ymApu2.png
 
Last edited:

Tagyhag

Member
Oct 27, 2017
12,507
This is control 1440p DLSS from only 576p that someone put up on Reddit (guy has also included a 2888p comparison for fun).


He says in motion he can't notice a difference. The detail on the jacket gets a tad softer in DLSS mode but otherwise it's pretty close.

That is insane, from 64 FPS to 115 FPS.

I can't even imagine what DLSS 3.0 is going to be like.
 

samred

Amico fun conversationalist
Member
Nov 4, 2017
2,586
Seattle, WA
I continue to struggle with DLSS enthusiasm, almost entirely because of how it fails in Death Stranding. Nvidia was very careful to use footage in the game's city zones to show off how DLSS 2.0 reconstructs content in those areas, and does so quite well. But as soon as you get into any predominantly organic landscape (which is a high percentage of the game), and then add rain or other particle effects (particularly in cut scenes), DLSS 2.0 loses a ton of detail. I just checked this again last night (for no reason at all) and it's startling how much detail is lost in an average rainstorm once DLSS is turned on.

I fully believe DLSS will be able to account for a wider range of 3D rendering scenarios. But for now, it's an uncanny valley situation for me. As soon as you lose *any* detail that a game creator intended to be visible for the sake of atmosphere, then I'm out--and impressively reconstructed textures and signs don't make up for this.
 

GuEiMiRrIRoW

Banned
Oct 28, 2017
3,530
Brazil

SiG

Member
Oct 25, 2017
6,485
I'm eager to see if DLSS can be implemented in a portable form factor. Perhaps it will take Nvidia a new node... or perhaps they are working on, or already have, a new SoC prepared. I'm curious to see if Nintendo will implement it on their next system; they are best positioned to do so, but we don't know if they'll strike a deal for it.
 

Tagyhag

Member
Oct 27, 2017
12,507
The Reddit user should take a picture like that at native 576p with no DLSS, so people can understand just how big the difference is.
 

ArchedThunder

Uncle Beerus
Member
Oct 25, 2017
19,068
all good points, but I think this poster sums it up well:



+1
Again, RT is less impressive right now because it's still early days and RT in games is just bolted on; more software optimizations and hardware improvements need to be made. Look at the marble demo from the recent Nvidia stream to get a taste of the end goal.

Just saying "DLSS is more impressive" because of where RT currently is is kind of missing the forest for the trees.
 

Pargon

Member
Oct 27, 2017
12,020
If this turns into some sort of GPU brand feature war it'll just SUCK. Like, this game has AMD marketing and you will only be able to use the (inevitable) AMD equivalent of DLSS with your AMD card and get x-times the performance compared to an Nvidia card... On the other hand, this other game has Nvidia marketing...
Ideally DirectML will be competitive, and that should be vendor-agnostic.
At least that way, even if it's not quite as good as DLSS, you get something rather than nothing at all, in AMD-sponsored games.

I'm not sure if 2.0 can officially scale above 4K currently - though there's not really any reason why it shouldn't be able to.
2.1 adds a 9x scaling option which makes that easier to render by enabling 1440p to 8K scaling rather than only 2160p to 8K.
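To put that 9x figure in pixel terms (assuming the usual 2560x1440 and 7680x4320 frame sizes):
Code:
# 9x scaling is 3x per axis: 2560x1440 (1440p) -> 7680x4320 (8K),
# versus 4x total for 2160p -> 8K.
low, high = (2560, 1440), (7680, 4320)
print((high[0] * high[1]) / (low[0] * low[1]))  # 9.0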

I can't even understand how this works so well. It doesn't make sense that you can have parity with an internal resolution that's not even Full HD. Can someone explain this in the simplest terms possible?
In the absolute most basic way, you train an AI on edge patterns.
It knows that when it sees a low resolution pattern which follows this pixel structure and brightness:
lowresline-8fjhm.png


The original high resolution image it was derived from looked like this:
highresline-ouk3o.png


And then the AI tries to transform the top image to look more like the bottom one.
This works because the AI is trained on huge datasets that compare very high resolution images against the exact same image rendered at a low resolution.
So it learns the difference that resolution makes to an otherwise identical scene, and figures out how to reverse it: starting with a low resolution input and turning it into a high resolution output.

Of course it is far more complicated than that, but that's the most basic way I can think to explain it.
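If it helps, here's a minimal sketch of that training idea in PyTorch. The toy model, layer sizes and 3x scale factor are made up for illustration; the real DLSS network and training pipeline are obviously far more involved.
Code:
import torch
import torch.nn as nn

# Toy upscaler: learns to map a low-res render to a high-res target.
class ToyUpscaler(nn.Module):
    def __init__(self, scale=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a higher-resolution image
        )

    def forward(self, x):
        return self.body(x)

model = ToyUpscaler()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Training pairs: the same frame rendered low-res (input) and very high-res (target).
low_res = torch.rand(1, 3, 240, 320)    # stand-in for a low-resolution render
high_res = torch.rand(1, 3, 720, 960)   # stand-in for the ground-truth render

loss = nn.functional.l1_loss(model(low_res), high_res)
loss.backward()
opt.step()  # repeat over a huge dataset of such pairs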

There is a massive performance hit doing DLSS type computations without something like tensor cores. To the point the rewards are not worth the overall loss of normal GPU rendering.
DLSS runs at the end of the frame. You can't reconstruct an image before the low resolution input has been created first.
Because of that, I think the requirement for tensor cores is overstated. Tensor cores run the reconstruction faster, but it's not stealing resources away from the rest of the GPU to run this type of reconstruction on shaders like RDNA 2.0 is said to - they have already finished most of their work.
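To illustrate that point, a rough sketch of where reconstruction sits in a frame; the function names are placeholders standing in for engine stages, not any real API:
Code:
def render_frame(rasterize, reconstruct, composite_ui, scene, low_res, target_res):
    # 1. Render the scene at the low internal resolution first.
    color, depth, motion_vectors = rasterize(scene, low_res)
    # 2. Reconstruction can only run now: it needs the finished low-res frame,
    #    plus depth and motion vectors, as its input.
    upscaled = reconstruct(color, depth, motion_vectors, target_res)
    # 3. UI and final post-processing are composited at the output resolution.
    return composite_ui(upscaled)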

This is control 1440p DLSS from only 576p that someone put up on Reddit (guy has also included a 2888p comparison for fun).


He says in motion he can't notice a difference. The detail on the jacket gets a tad softer in DLSS mode but otherwise it's pretty close.
The thing is that DLSS builds up the image over multiple frames - at least eight of them - so if you're standing still it can do a fantastic job reconstructing the image to look just like native.
The lower the base resolution is, the more resolution you lose as soon as things start moving.
Now in some respects this is ideal for modern displays, since they blur the image as soon as anything is moving. But I do wonder whether this aspect of DLSS would be far more noticeable on an OLED running at 120Hz with BFI enabled for example. That display would have significantly less motion blur, and be more revealing of this aspect of DLSS, while a sample-and-hold LCD monitor will blur the image so much you may not notice it.
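A minimal sketch of that accumulation idea, assuming simple motion-vector reprojection and a fixed exponential blend (DLSS itself weighs samples with a trained network rather than a fixed factor):
Code:
import numpy as np

def accumulate(history, current, motion_vectors, blend=0.1):
    """Blend the new frame into reprojected history.

    history, current: float arrays of shape (H, W, 3)
    motion_vectors:   per-pixel (dy, dx) offsets, shape (H, W, 2)
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Reproject: fetch where each pixel was last frame (clamped to the image).
    prev_y = np.clip(ys - motion_vectors[..., 0], 0, h - 1).astype(int)
    prev_x = np.clip(xs - motion_vectors[..., 1], 0, w - 1).astype(int)
    reprojected = history[prev_y, prev_x]
    # A low blend factor means the image converges over many frames while static,
    # but fast motion gives the history less chance to accumulate detail.
    return (1 - blend) * reprojected + blend * current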

As soon as you lose *any* detail that a game creator intended to be visible for the sake of atmosphere, then I'm out--and impressively reconstructed textures and signs don't make up for this.
What are your thoughts on anti-aliasing? Particularly TAA.
 

Vuze

Member
Oct 25, 2017
4,186
I tried it in control, which has DLSS 2.0. It's disabled at that resolution.

5160x2160. Notice how render resolution is locked even though DLSS is selected.
YQVAWxR.png


3840x2160. Works fine.
5ymApu2.png
Damn, massive bummer... they really need to address this. I thought arbitrary-resolution DLSS support was one of the major features of 2.0, but I might be misremembering; I wasn't following the development too closely. I'm fairly sure I've seen ultrawide DLSS footage though, so I doubt/hope it's not the aspect ratio.
 

Firefly

Member
Jul 10, 2018
8,634
I tried it in control, which has DLSS 2.0. It's disabled at that resolution.

5160x2160. Notice how render resolution is locked even though DLSS is selected.
YQVAWxR.png


3840x2160. Works fine.
5ymApu2.png
Your resolution might be too high but 3440x1440 21:9 works with DLSS in Control according to this tweet.



Would addressing this be on Nvidia or Remedy?
 

Deleted member 76797

Alt-Account
Banned
Aug 1, 2020
2,091
Patents always baffle me; this seems like bullshit considering deep image reconstruction has been a thing for years. How does Sony get to patent this?

Also, the technology to do this isn't really mature enough outside of Nvidia's ecosystem. I doubt anything will come of this.

I think the PS5 Pro and Xbox Series X-2 will potentially have something similar. AMD has to see what's going on and is probably working on their own solution. But on the other hand, Nvidia has been cooking this up for a while and dumping tons of money into its R&D.

I hope AMD is working on it, because it's gonna be absolute bullshit if it turns out that all the AMD-partnered PC games don't support DLSS.
 

Detail

Member
Dec 30, 2018
2,947
I continue to struggle with DLSS enthusiasm, almost entirely because of how it fails in Death Stranding. Nvidia was very careful to use footage in the game's city zones to show off how DLSS 2.0 reconstructs content in those areas, and does so quite well. But as soon as you get into any predominantly organic landscape (which is a high percentage of the game), and then add rain or other particle effects (particularly in cut scenes), DLSS 2.0 loses a ton of detail. I just checked this again last night (for no reason at all) and it's startling how much detail is lost in an average rainstorm once DLSS is turned on.

I fully believe DLSS will be able to account for a wider range of 3D rendering scenarios. But for now, it's an uncanny valley situation for me. As soon as you lose *any* detail that a game creator intended to be visible for the sake of atmosphere, then I'm out--and impressively reconstructed textures and signs don't make up for this.

As someone who is still on a 1080 Ti, do you have any examples of this in pictures? I have been very skeptical of DLSS, and one of my major concerns has been loss of detail versus native resolution being ignored because of the improved performance.

The images in the OP show a lack of detail and more aliasing, and it's very clear (even to someone with poor eyesight like me). So is this a regular occurrence with DLSS, or just an anomaly, and DLSS really is the magic sauce that makes games look better than native 4K with double the performance?
 

RAWRferal

Member
Oct 27, 2017
1,361
London, UK
I continue to struggle with DLSS enthusiasm, almost entirely because of how it fails in Death Stranding. Nvidia was very careful to use footage in the game's city zones to show off how DLSS 2.0 reconstructs content in those areas, and does so quite well. But as soon as you get into any predominantly organic landscape (which is a high percentage of the game), and then add rain or other particle effects (particularly in cut scenes), DLSS 2.0 loses a ton of detail. I just checked this again last night (for no reason at all) and it's startling how much detail is lost in an average rainstorm once DLSS is turned on.

I fully believe DLSS will be able to account for a wider range of 3D rendering scenarios. But for now, it's an uncanny valley situation for me. As soon as you lose *any* detail that a game creator intended to be visible for the sake of atmosphere, then I'm out--and impressively reconstructed textures and signs don't make up for this.
Meh, I suppose those who can afford to be on the cutting edge all the time could hold this opinion.

Given that we only have one generation of hardware currently released which supports DLSS, I do wonder how much of a boon it will be for those who can only upgrade infrequently, in terms of maintaining playable frame rates and some level of eye candy on new game releases years down the line. I think those people will give significantly less of a rat's ass about missing a few details.

Plus, given the significant improvement between DLSS and DLSS 2.0, it's not unreasonable to expect the technology to mature even more in the future - although there will always be limits to the minimum viable base resolution (you can't enhance something that isn't there to begin with!)
 

SapientWolf

Member
Nov 6, 2017
6,565
There is a massive performance hit doing DLSS type computations without something like tensor cores. To the point the rewards are not worth the overall loss of normal GPU rendering.
Control had a shader version of DLSS that didn't incur a massive performance hit before the upgrade to DLSS 2.0. There's more than one way to do ML supersampling. It didn't look as good as 2.0 but it looked far better than simple upscaling.

I think the biggest hurdle will be training the models and Nvidia has a huge head start in terms of infrastructure.
 

SiG

Member
Oct 25, 2017
6,485
Control had a shader version of DLSS that didn't incur a massive performance hit before the upgrade to DLSS 2.0. There's more than one way to do ML supersampling. It didn't look as good as 2.0 but it looked far better than simple upscaling.

I think the biggest hurdle will be training the models and Nvidia has a huge head start in terms of infrastructure.
The previous shader version did have limits though, such as constant ghosting and artifacts near meshed surfaces.
I hope AMD is working on it, because it's gonna be absolute bullshit if it turns out that all the AMD-partnered PC games don't support DLSS.
Aren't there actually PC titles that were AMD-partnered but later integrated DLSS? Or am I thinking of something else? (The Tomb Raider reboot sequels.)

Also, I wonder why so many people here are wishing for it on the other two big consoles when the Switch stands to benefit the most?
 

Sean Mirrsen

Banned
May 9, 2018
1,159
It's not an extra chip. It is just an execution block inside the GPU. Doing this off-die would absolutely kill any efficiency gains for anything but pure inference compute.
Maybe not an "extra chip", but Tensor architecture is specifically optimized for the kind of sparse neural networks Nvidia's Deep Learning networks use. It's basically an AI coprocessor inside the GPU. Without that optimization, and without specific support for INT32 operations rather than FP32, a generic GPU or CPU would waste too much power trying to run this DLSS process.
 

catpurrcat

Member
Oct 27, 2017
7,790
Again, RT is less impressive right now because it's still early days and RT in games is just bolted on; more software optimizations and hardware improvements need to be made. Look at the marble demo from the recent Nvidia stream to get a taste of the end goal.

Just saying "DLSS is more impressive" because of where RT currently is is kind of missing the forest for the trees.


Oh, I hear ya (and very much appreciate the video demo; I've seen it before but could have used a refresher!). Honestly, it still doesn't impress me, or clearly others in this thread, the way DLSS does, which is where I would hope they are putting the vast majority of their development resources.

That's why I say DLSS is "better", which I do apologize for, because that's really not the right adjective to have used. I don't mean to be flippant; it's just a matter of resource allocation and the benefits I am personally seeing from it as a consumer.

RT seems to be solving a problem I didn't know we even really had. Whereas DLSS will make quality of life improvements for so many more people right now.

Death Stranding 4K with DLSS on PC is this year's GOAT technology, like how good Gsync was on Doom 2016 or pretty much any FPS when it became affordable and easily implemented really fast, on almost any rig.

RT reminds me a lot of HDR at the beginning of this generation.

People arguing over whether they could see the HDR difference, versus those who couldn't.

Now, years later, we know the culprits were, of course, the panel lottery in people's TVs and people not calibrating their TVs properly. But the worst offender of all was the fact that there were so many variations of HDR out there, thanks to nonsense marketing language used to confuse consumers.

"Effective" HDR
"True" HDR
HDR compatible
Fake HDR vs real HDR
Poorly implemented HDR
HDR10 vs Dolby Vision
HDR400 on 1440p PC monitors (ugh!)

IMO HDR was like RT in that the same arguments were used back then, that it was ahead of its time and we should all get a TV with HDR so we can be future proof.

Now we have RT, which needs DLSS to run at any sort of playable frame rate for anyone below a 2080 Ti. And I await the inevitable arguments when the 3000 series is released, between people saying "it's not real ray tracing if you're using DLSS upsampling" and those going with native resolutions.

Additionally, there's going to be AMD's take on ray tracing, which will encourage more arguments because it involves console warrior fanboys. Not to mention RDNA2, which starts the whole team red vs. team green debate over who does RT better (...meanwhile, games will have varying implementations of it, just like HDR).

Just like HDR, the RT implementation market will vary between:
a.) different players (nvidia vs amd vs console implementation based on amd vs movie special effects industry)
b.) The framerate players find accessible to their rig + tv/monitor
c.) marketing buzzwords to further confuse the gaming consumer from software/games marketing people, leading to disappointment when ray tracing is not implemented the way it was marketed to be in certain games...

Once again though, I do appreciate that you've made really solid points, and your points are well taken. A lot of the time, discussions about these issues devolve into name-calling, and I really appreciate how you brought evidence to this. I'm actually sharing the RT video you posted above with a few of my friends as we speak. :)
 

aett

Member
Oct 27, 2017
2,027
Northern California
Does anyone know if there are plans to add DLSS to other games that are already released, or is it something we should just be seeing on (some? most? all?) titles moving forward?

I just learned about it yesterday and reinstalled Control, which didn't run very smoothly for me at launch and when I upgraded to a 1440p monitor and tried that resolution, it was understandably even worse. But with DLSS turned on, I can run the game on High settings and get a steady 60 fps. It's insane.
 

tuxfool

Member
Oct 25, 2017
5,858
Maybe not an "extra chip", but Tensor architecture is specifically optimized for the kind of sparse neural networks Nvidia's Deep Learning networks use. It's basically an AI coprocessor inside the GPU. Without that optimization, and without specific support for INT32 operations rather than FP32, a generic GPU or CPU would waste too much power trying to run this DLSS process.
I should point out that the CUs do support INT32. For NNs we are actually talking about even lower precision, either FP16 or INT8 (see the sketch below).

The Ampere Tensor cores support sparsity, but the Turing-era cores do not.

They seemingly managed fine with just shader cores in DLSS 1.9. Results weren't as good, but it worked fine.
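To make the precision point concrete, a small sketch of what dropping weights to INT8 looks like, using naive symmetric quantization (real inference stacks use calibrated, per-channel schemes):
Code:
import numpy as np

weights = np.random.randn(4, 4).astype(np.float32)

# FP16: just a narrower float format, still covering a wide dynamic range.
w_fp16 = weights.astype(np.float16)

# INT8: map the float range onto -127..127 with a single scale factor.
scale = np.abs(weights).max() / 127.0
w_int8 = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

print(np.abs(weights - w_dequant).max())  # small quantization error, big memory/compute savings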
 
Last edited:

Sprat

Member
Oct 27, 2017
4,684
England
I'd like to see it in person, as the screenshots look like they have an aggressive sharpening filter.

But I will never buy Nvidia again after I had three GeForce FX cards crap the bed in the space of six months.
 

Diablos

has a title.
Member
Oct 25, 2017
14,594
Ideally DirectML will be competitive, and that should be vendor-agnostic.
At least that way, even if it's not quite as good as DLSS, you get something rather than nothing at all, in AMD-sponsored games.


I'm not sure if 2.0 can officially scale above 4K currently - though there's not really any reason why it shouldn't be able to.
2.1 adds a 9x scaling option which makes that easier to render by enabling 1440p to 8K scaling rather than only 2160p to 8K.


In the absolute most basic way, you train an AI on edge patterns.
It knows that when it sees a low resolution pattern which follows this pixel structure and brightness:
lowresline-8fjhm.png


The original high resolution image it was derived from looked like this:
highresline-ouk3o.png


And then the AI tries to transform the top image to look more like the bottom one.
This works because the AI is trained on huge datasets that compare very high resolution images against the exact same image rendered at a low resolution.
So it learns the difference that resolution makes to an otherwise identical scene, and figures out how to reverse it: starting with a low resolution input and turning it into a high resolution output.

Of course it is far more complicated than that, but that's the most basic way I can think to explain it.


DLSS runs at the end of the frame. You can't reconstruct an image before the low resolution input has been created first.
Because of that, I think the requirement for tensor cores is overstated. Tensor cores run the reconstruction faster, but it's not stealing resources away from the rest of the GPU to run this type of reconstruction on shaders like RDNA 2.0 is said to - they have already finished most of their work.


The thing is that DLSS builds up the image over multiple frames - at least eight of them - so if you're standing still it can do a fantastic job reconstructing the image to look just like native.
The lower the base resolution is, the more resolution you lose as soon as things start moving.
Now in some respects this is ideal for modern displays, since they blur the image as soon as anything is moving. But I do wonder whether this aspect of DLSS would be far more noticeable on an OLED running at 120Hz with BFI enabled for example. That display would have significantly less motion blur, and be more revealing of this aspect of DLSS, while a sample-and-hold LCD monitor will blur the image so much you may not notice it.


What are your thoughts on anti-aliasing? Particularly TAA.
That is fascinating... it's basically rendering what was lost — in real time — and adding it to the frame...
 

laxu

Member
Nov 26, 2017
2,782
What annoys me about DLSS is that it's very picky about what resolutions it will work for. If I want to use it with my monitor's 4K equivalent (5160x2160), DLSS won't work. But if I switch to 3840x2160, it works just fine. But I don't want to play in 16:9 :(

DLSS 2.0 does work at any resolution, but the UI in Control is shit. Edit renderer.ini and you can input your desired values for the settings; see the sketch below.

To give you an idea of how shit Control's resolution handling is, it does not support super-ultrawide resolutions out of the box or scale them correctly. A third-party patcher fixes this, and after that the game is actually one of the best-behaved super-ultrawide games, with no FOV distortion issues.
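For anyone who wants to script that tweak, a rough sketch is below. The path and the key names are placeholders only, not the game's actual entries; open your own renderer.ini, copy the exact keys it uses, and back the file up first.
Code:
from pathlib import Path

# Path and key names are illustrative placeholders; locate your own renderer.ini
# and substitute the real keys from that file before running anything.
ini = Path("renderer.ini")
lines = ini.read_text().splitlines()

overrides = {"RenderResolutionX": "2580", "RenderResolutionY": "1080"}  # hypothetical keys

patched = []
for line in lines:
    key = line.split("=", 1)[0].strip()
    patched.append(f"{key}={overrides[key]}" if key in overrides else line)

ini.write_text("\n".join(patched) + "\n")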