
Alexious

Executive Editor for Games at Wccftech
Verified
Oct 26, 2017
909
They're not separate things. DLSS is an advanced form of checkerboarding using deep learning.

I think it's incorrect to say that. Checkerboarding is a very specific technique and DLSS works in a completely different way, though both share the goal of improving performance.
 

ara

Member
Oct 26, 2017
13,000


Dammit, the MSI Trio 2080 Ti is 327mm long... just too long for the Fractal Define R5. Hopefully I can shift my hard drive cages far enough out of the way.

Ah shit, I didn't realize they were that big. I have the same case and figured it's big enough to house anything.
 

Darktalon

Member
Oct 27, 2017
3,265
Kansas
They are all removable. I have no spinning drives in mine, nor optical drives, so I took everything in the front out.
I have a Blu-ray drive I never use, so I could take the top cage out, but I have to use the 5-drive cage, sadly. I'll have to move it and measure; it still may not have enough clearance even if I move it all the way to the top or bottom. (These cards are too dang thicc.)
 

Pargon

Member
Oct 27, 2017
11,994


Dammit, the MSI Trio 2080 Ti is 327mm long... just too long for the Fractal Define R5. Hopefully I can shift my hard drive cages far enough out of the way.
Wow, that is a ridiculously large GPU.
I chose the Define R5 for my builds because it has direct airflow over the drives, and the number of drives that it can hold, so removing the cages is not an option for me.
 

daninthemix

Member
Nov 2, 2017
5,022
Wow, that is a ridiculously large GPU.
I chose the Define R5 for my builds because it has direct airflow over the drives, and the number of drives that it can hold, so removing the cages is not an option for me.

I have the Define R5 too, but I have no drive cages because it's just SSDs. So I should be good, right?
 

Pargon

Member
Oct 27, 2017
11,994
I have the Define R5 too, but I have no drive cages because just SSDs. So I should be good, right?
If you remove the drive cages, that image says it will support a GPU up to 440mm in length, and that specific 2080 Ti is 327mm.

EDIT: I'm not sure why they wrote it in small text (and it didn't format properly, so it was tiny here), but that poster was referring specifically to an "MSI Trio" 2080 Ti.
The reference GPU is 267mm and will fit without removing the drive cages. I expect most third-party GPUs will too, but you'd have to check that they're under 310mm.
I thought anything over 300mm was a thing of the past.
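Since a few of us are doing this clearance math, here's a quick hypothetical Python sketch of the check. The 310mm/440mm limits are the Define R5 figures quoted above (with and without the drive cages); the helper name is mine:

```python
# Hypothetical helper for checking GPU clearance in a Fractal Define R5.
# Limits are the spec figures mentioned above; card lengths from this thread.

CLEARANCE_MM = {
    "cages_installed": 310,  # max GPU length with the drive cages in place
    "cages_removed": 440,    # max GPU length with the cages taken out
}

def fits(gpu_length_mm: int, cages_removed: bool = False) -> bool:
    """Return True if a card of the given length clears the case."""
    limit = CLEARANCE_MM["cages_removed" if cages_removed else "cages_installed"]
    return gpu_length_mm <= limit

print(fits(267))                      # reference 2080 Ti fits with cages in
print(fits(327))                      # MSI Trio 2080 Ti does not
print(fits(327, cages_removed=True))  # but it does once the cages come out
```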
 
Last edited:

Deleted member 21996

User requested account closure
Banned
Oct 28, 2017
802
https://www.youtube.com/watch?v=w9FtXZGQzfM

GamersNexus finally managed to get the HSF apart. Bad news: They used glue. Crazy cooler design though, can't wait to see how it performs compared to the traditional heatpipe designs.

That teardown is nuts. The number of screws he ends up having to take out is like a rabbit hole. I wonder if his joke about the screws accounting for the cost rather than the performance meant anything. We'll see tomorrow!
 

LucidMomentum

Member
Nov 18, 2017
3,645
Anyone know which brand has the best reputation for gpu? I can't figure out which one to get.

Right now, no. Since the coolers are all new designs, we're waiting on reviews to see which of the new cards works best.

However I've had good experiences with EVGA, MSI, and ASUS in the past.

I'm going with a stock NVIDIA Founders Edition, however, since my case is small. I don't have much room to spare, and I certainly can't fit the 3-slot cards in lol.
 

RGV_Rage

Member
Nov 14, 2017
98
TX, USA


Dammit, the MSI Trio 2080 Ti is 327mm long... just too long for the Fractal Define R5. Hopefully I can shift my hard drive cages far enough out of the way.
Hadn't even checked the card length for mine yet. I ordered the MSI Trio 2080, which is the same size, and just checked: my case supports up to a 330mm card. 3mm to spare. Upgrading from an MSI GTX 980: 279mm -> 327mm, nearly 20% longer.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA
That teardown is nuts. The number of screws he ends up having to take out is like a rabbit hole. I wonder if his joke about the screws accounting for the cost rather than the performance meant anything. We'll see tomorrow!
I mean, it's going to make them more expensive to manufacture, so I'm sure it's part of it... but the real reason is the R&D and silicon for the RT cores. It's a massive chip, remember. Build quality is a factor though, from the sound of it.

My case is plenty roomy so I didn't even think to measure it. The two fan EVGA I've ordered (it's the 2.75 slot one) will fit easily in my case. Last case I had was a Shuttle (I was building a PC and knew I'd have to travel with it) and I got sick of having to dismantle the entire thing to take out a stick of RAM.
 

low-G

Member
Oct 25, 2017
8,144
Don't most power supplies come with an 8-pin and a 6+2 on one chain, and at least two sets of those?
Nah, newer basic ones come with two 8-pins.

But my Seasonic Gold semi-modular 550W from just 3 years ago comes with just the one 8-pin, and either I misplaced the other cable (I have all the other cables) or it didn't come with one.

3 cables is insane!

Completely dependent on wattage, from what I've seen. My 650 has 8+8; my 850 has more than that. Older PSUs I've had only had 6, 8, etc...
 

Kuosi

Member
Oct 30, 2017
2,366
Finland
I have a Blu-ray drive I never use, so I could take the top cage out, but I have to use the 5-drive cage, sadly. I'll have to move it and measure; it still may not have enough clearance even if I move it all the way to the top or bottom. (These cards are too dang thicc.)
Damn, I don't think there's a way to keep that 5-drive cage and have space for a long card. Can you even attach the cage at the very top? Or put 3 drives in the bottom cage and the other 2 somehow in the optical drive cage...
 
Nov 8, 2017
13,097
Lower internal resolution, but no checkerboard pattern or anything like that. Its method for generating the missing information is completely different.

I haven't read the full white paper, but my understanding was that the precise mechanism by which the "new" pixels were generated was a product of machine learning and would vary from game to game. Speculation on my part, but it doesn't seem inconceivable that some kind of pseudo-checkerboarding may be occurring in certain titles, or perhaps all of them.

What I'm mainly unclear on is whether Nvidia has tightly controlled the problem space that the ML is searching in, so it would more or less be finding optimal variations on a common theme, or whether it's really broad and could be using wildly varying techniques that have little in common with each other.
 

Deleted member 1594

Account closed at user request
Banned
Oct 25, 2017
5,762
Had to make sure, it just seems like such a late date.
Oh, it is. Without a doubt. Delay the reviews as long as possible so people can't cancel their pre-orders before the cards actually launch, or grab the impatient people that just say "fuck it" and order one before the reviews are out anyways. I don't even plan to get one until next year and I'm still annoyed :P

More of a walkthrough of RTX using the Atomic Heart engine. I've seen multiple trailers for this game... and this video looks like something that's actually not in the game and is just set up as a small tech demo/showcase for Nvidia. Still cool, I guess.
 

ArnoldJRimmer

Banned
Aug 22, 2018
1,322
I haven't read the full white paper, but my understanding was that the precise mechanism by which the "new" pixels were generated was a product of machine learning and would vary from game to game. Speculation on my part, but it doesn't seem inconceivable that some kind of pseudo-checkerboarding may be occurring in certain titles, or perhaps all of them.

What I'm mainly unclear on is whether Nvidia has tightly controlled the problem space that the ML is searching in, so it would more or less be finding optimal variations on a common theme, or whether it's really broad and could be using wildly varying techniques that have little in common with each other.

Do we have any examples of DLSS working? A screenshot or two for comparisons. We should be able to tell fairly easily if any sort of checkerboarding is taking place.

Speculating here as well, but I'm guessing it just doesn't work in the same way at all. I think the AI takes the fully rasterized frame and upscales it with magic AI that will probably one day be upscaling us for lunch, but I digress. :p Unless it has access to other parts of the rendering pipeline (does it?), it couldn't do the type of interpolation checkerboarding does.
 

dgrdsv

Member
Oct 25, 2017
11,846
I mean could DLSS and checkerboarding be used in combination for an even bigger performance gain whilst maintaining good IQ?
Sure. But it would be a waste to use checkerboarding on Turing when you have VRS and TSS to use instead of plain checkerboarding.

Speculating here as well, but I'm guessing it just doesn't work in the same way at all. I think the AI takes the fully rasterized frame and upscales it with magic AI
Pretty much. The AI just draws on top of a lower resolution image to make it look like a higher resolution one. Results will obviously differ.
DLSS 2x is a lot more interesting, as it uses the native resolution image and then applies DL AA to make it look like a 64x SSAA image. Considering that the tensor cores are able to run in parallel with general FP processing, this may result in somewhat of a "free supersampling".
 

gabdeg

Member
Oct 26, 2017
5,956
🐝

Little snippet of the upcoming raytracing benchmark from UL Benchmarks.

Metro looks so good with RT GI on. That switch at 0:48... now that's exactly my crap.
 

Durante

Dark Souls Man
Member
Oct 24, 2017
5,074
DLSS doesn't work like some of the theories here assume. We don't know much about it for certain, but we do know that it needs motion vectors so it's not purely image-based AI upsampling. (Which would probably look bad in motion)
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA

Little snippet of the upcoming raytracing benchmark from UL Benchmarks.

Metro looks so good with RT GI on. That switch at 0:48... now that's exactly my crap.

It's interesting, because you can see the (current) performance hit for turning on RTX in Metro... but the difference in IQ is stark. I wonder how this compares in performance to VXAO, because I'd argue it's much better looking than VXAO (which itself had a large performance cost). The scene with the shutters opening and closing was pretty jaw-dropping, but clearly RTX off is running *much* smoother than on. Hopefully G-Sync can clear that up. It's hard to estimate by eye just how far below 60 this is running with RTX on, but I don't believe it's dipping below 30.

Edit: I was right. DF threw their analysis tools at this footage and RTX on runs from around 45 to 60 fps.
 
Last edited:

Crooked Rain

Member
Oct 25, 2017
184
Got an EVGA RTX 2080 XC and XC Ultra pre-ordered on Amazon. Not sure which one I want to keep. Benchmarks can't come soon enough.
 

Vash63

Member
Oct 28, 2017
1,681
That teardown is nuts. It's like a rabbit hole the amount of screws he ends up having to take out. I wonder if his joke about the screws accounting for the cost rather than performance meant anything. We'll see tomorrow!

Yeah. The two aren't mutually exclusive - the gigantic vapor chamber is legitimately unique and potentially very good, but that doesn't mean too much of the cost isn't going to stupid screws. That many screws have to be a significant portion of the HSF's BoM.

Steve does mention, though, that due to the complete lack of airflow to the PCB, the tolerances are probably extremely tight; everything likely has to be sub-millimeter to make sure every single IC that needs cooling is contacting the heat spreader somewhere.

DLSS doesn't work like some of the theories here assume. We don't know much about it for certain, but we do know that it needs motion vectors so it's not purely image-based AI upsampling. (Which would probably look bad in motion)

Yeah, it's mentioned that (similar to TAA) it uses temporal data, but instead of just anti-aliasing, it's actually supposed to keep the detail from previous frames as well, using AI + motion vectors to know what detail to keep.

Think of something like a moving sword: if the sword is moving from left to right, then with the motion vectors the AI will know it can take data from that sword's previously rendered output and transpose it on top of the newly rendered output, essentially getting multiple samples in a single image. The key really will be in how they know what data is valid and what needs to be thrown away, so that specular highlights and such work properly even if the angles are changing slightly. It will be very interesting to see if there are any noticeable artifacts from the process.
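Pure speculation on the mechanics, but that kind of motion-vector-guided sample reuse looks roughly like this toy Python sketch. To be clear, this is not Nvidia's implementation; the blend weight and rejection threshold here are made-up numbers just to show the idea:

```python
# Toy sketch of motion-vector-guided history reuse (speculative, not DLSS itself).
# For each pixel, look up where it came from in the previous frame and blend
# that history sample in, unless it disagrees too much (disocclusion, speculars).

import numpy as np

def reproject_blend(curr, prev, motion, alpha=0.1, reject_thresh=0.5):
    """Blend the current frame with reprojected history.

    curr, prev:    (H, W) grayscale frames
    motion:        (H, W, 2) per-pixel motion vectors (dy, dx) in pixels
    alpha:         weight given to the new sample
    reject_thresh: discard history that differs too much from the new sample
    """
    h, w = curr.shape
    out = curr.copy()
    for y in range(h):
        for x in range(w):
            # where this pixel was in the previous frame
            py = int(round(y - motion[y, x, 0]))
            px = int(round(x - motion[y, x, 1]))
            if 0 <= py < h and 0 <= px < w:
                hist = prev[py, px]
                # keep history only when it agrees with the new sample
                if abs(hist - curr[y, x]) < reject_thresh:
                    out[y, x] = alpha * curr[y, x] + (1 - alpha) * hist
    return out
```

For a static scene (zero motion vectors), this just accumulates samples over time, which is exactly the "multiple samples in a single image" effect described above.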
 

dgrdsv

Member
Oct 25, 2017
11,846
DLSS doesn't work like some of the theories here assume. We don't know much about it for certain, but we do know that it needs motion vectors so it's not purely image-based AI upsampling. (Which would probably look bad in motion)
Or it just uses the previous frame to create AA for the current one, in parallel with the current one being rendered.
 

low-G

Member
Oct 25, 2017
8,144

Little snippet of the upcoming raytracing benchmark from UL Benchmarks.


I knew they were gonna put out a raytracing benchmark, but

#1. Why are they making a benchmark that can run at a good framerate? This shit should run at 10fps on a 2080 Ti to be legit Futuremark. Is UL going to make them go soft?
#2. Much less impressive than Nvidia's several tech demos. Pretty simple visually.

Sure, it's a benchmark, but I remember ~15 years ago when a new 3DMark was a major event. I know it can't be that way again, but even Time Spy looked real good, I think. I hope this is the 'low spec' benchmark.
 

LucidMomentum

Member
Nov 18, 2017
3,645
For you! I have yet to receive a shipping notification despite ordering 2 minutes into the presentation and using PayPal :(.

I'm banking on my Best Buy pick up in store date still holding. But we'll see.

I knew they were gonna put out a raytracing benchmark, but

#1. Why are they making a benchmark that can run at a good framerate? This shit should run at 10fps on a 2080 Ti to be legit Futuremark. Is UL going to make them go soft?
#2. Much less impressive than Nvidia's several tech demos. Pretty simple visually.

Sure, it's a benchmark, but I remember ~15 years ago when a new 3DMark was a major event. I know it can't be that way again, but even Time Spy looked real good, I think. I hope this is the 'low spec' benchmark.

I'm sure there will be an Extreme version, just like Fire Strike.
 

brainchild

Independent Developer
Verified
Nov 25, 2017
9,478
DLSS doesn't work like some of the theories here assume. We don't know much about it for certain, but we do know that it needs motion vectors so it's not purely image-based AI upsampling. (Which would probably look bad in motion)

Yeah, it's mentioned that (similar to TAA) it uses temporal data, just instead of just anti-aliasing it's actually supposed to keep the detail from the previous frames also and use AI + motion vectors to know what detail to keep.

Or it just uses a previous frame to create AA for the current one in parallel to the current one being rendered.

So, like TAA? In which case, DLSS would be yet another AA method that doesn't work with SLI.


I think some of the confusion is the result of the training of the DNN for DLSS being separate from how DLSS currently functions.

During training, DLSS relied on back propagation, dynamically adjusting weights based on the iterative differences between 64x supersampled reference images and the respective raw inputs. After sufficient training, however, the AI now 'knows' how to produce output that approximates the reference quality, and it functions by analyzing multiple inputs within a given set of frames, as described in the excerpt from the Turing whitepaper below:


To train the network, we collect thousands of "ground truth" reference images rendered with the gold standard method for perfect image quality, 64x supersampling (64xSS). 64x supersampling means that instead of shading each pixel once, we shade at 64 different offsets within the pixel, and then combine the outputs, producing a resulting image with ideal detail and anti-aliasing quality. We also capture matching raw input images rendered normally. Next, we start training the DLSS network to match the 64xSS output frames, by going through each input, asking DLSS to produce an output, measuring the difference between its output and the 64xSS target, and adjusting the weights in the network based on the differences, through a process called back propagation. After many iterations, DLSS learns on its own to produce results that closely approximate the quality of 64xSS, while also learning to avoid the problems with blurring, disocclusion, and transparency that affect classical approaches like TAA. In addition to the DLSS capability described above, which is the standard DLSS mode, we provide a second mode, called DLSS 2X. In this case, DLSS input is rendered at the final target resolution and then combined by a larger DLSS network to produce an output image that approaches the level of the 64x super sample rendering - a result that would be impossible to achieve in real time by any traditional means

So DLSS = Performance Mode, and DLSS 2X = High Quality Mode.
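To make the back-propagation loop from that excerpt concrete, here's a toy Python sketch: a single linear "network" trained against stand-in ground-truth targets. This is nothing like the real DLSS architecture, just an illustration of the train-toward-64xSS pattern the whitepaper describes:

```python
import numpy as np

# Toy stand-ins: "raw inputs" are random frames, "targets" play the role of
# the 64xSS ground-truth references the network is trained to match.
rng = np.random.default_rng(0)
raw_inputs = rng.random((100, 16))
true_W = np.full((16, 16), 0.0625)
targets = raw_inputs @ true_W

W = rng.random((16, 16)) * 0.01  # the "network": a single linear layer
lr = 0.01
for _ in range(2000):
    out = raw_inputs @ W                 # ask the network for an output
    err = out - targets                  # measure the difference vs. the target
    # adjust the weights based on the differences (the back propagation step)
    W -= lr * raw_inputs.T @ err / len(raw_inputs)

print(np.abs(raw_inputs @ W - targets).mean())  # error shrinks toward zero
```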

And we can see here how effective the result is:

[Image: 64xSS reference vs. DLSS 2X comparison]



The DNN also has to intelligently combine inputs based on its 'understanding' of the motion of the scene, instead of strictly following motion vector values like TAA does, which gives us results like this:

[Image: TAA vs. DLSS 2X comparison]




Something else also worth mentioning is the AI Super Rez technology, which effectively increases resolution from a lower resolution sample by 'intelligently' inferring new pixel data to preserve detail:

[Image: AI Super Rez example]


Though this seems to only work for video and pictures, not interactive media.


At any rate, I'm working on a thread that goes over all of the new features of Turing, as described in the whitepaper. Hopefully that will help consolidate all of the information we have on it so far.
 

plagiarize

Eating crackers
Moderator
Oct 25, 2017
27,511
Cape Cod, MA
Since DLSS is only on cards with NVLink, it would not be a problem in SLI. In fact, TAA should work just fine now too; the lack of bandwidth is why TAA tanked framerates in SLI.
Not the case. The 2070 cards have DLSS but no NVLink. Furthermore, Nvidia haven't really announced or gone into any detail about what we can expect from NVLink beyond regular old SLI on high-end gaming GPUs. Ideally you'd be able to have two GPUs appear as a single GPU to Windows, but even with NVLink you're still going to hit bandwidth issues (since the bandwidth between a GPU and its onboard RAM is much higher). Not to say it isn't a solvable problem, but I genuinely think we'll be moving away from AFR as a multi-GPU solution. That's just my amateur opinion, of course, but AFR's place in DX12 (which, unless I'm mistaken, every DXR title is going to use) is of shrinking utility.
 

BriGuy

Banned
Oct 27, 2017
4,275
I wasn't expecting Nvidia to ship these out so soon. I thought my 2080 would arrive next Tuesday, but it's scheduled for delivery this Thursday.

The last time I changed a graphics card was when I replaced a TNT card with a Radeon something-or-other in 2001 (16 MB of RAM up to 32, baby). I hope I don't fuck this up.