The most interesting single tidbit was when Jensen said "Using the same methodology as Microsoft Xbox" in reference to ray-triangle intersections on the RT core.
I wonder if this is a general comment regarding RDNA2, or if Xbox is actually doing things slightly differently from RDNA2 and the PS5.
On the same page.....
Doesn't RDNA2 share resources between RT and rasterization? Ampere sounds better from a hardware standpoint because, even with concurrent scheduling, it still delivers enough performance for RT, rasterization, and tensor work at once. Not knocking RDNA2, but Ampere's hardware is something fierce. I'm jealous that networking architectures for queues or services overall aren't as well designed as GPUs are for graphics. The only way forward is to make sure all the fundamentals are strong, since doing everything all at once seems to be how RT requires things to be done well and fast. Streamlining will be a bitch, but solid performance gains can be had now. PBR forced changes, but this is gonna get the ball rolling on two different aspects of the rendering pipeline that haven't seen much in the way of tangible gains in a while.
We basically went from RT in low single/double-digit fps territory, nowhere near fucking close to 4K, to an 8K monster. The 3090 is making 8K viable. For stability we will always gut resolution, and dynamic res + DLSS is a concept I'm still wrapping my head around. So consoles have established some form of RT performance in the 2060 range, and thus the 3090 has the potential to be one of the greatest generational increases, if not the greatest. Most other cards up until this point have been just raster; this card does raster, RT, and tensor.
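On the dynamic res + DLSS point, here's roughly how I picture the two fitting together (a minimal toy sketch, not NVIDIA's actual API; all names and the per-frame costs are made up): the engine lowers the internal render resolution whenever frame time goes over budget, and the upscaler reconstructs the fixed output resolution from that smaller frame.

```cpp
#include <algorithm>
#include <cstdio>

int main() {
    const double targetMs = 1000.0 / 60.0;    // 60 fps frame budget
    const int outputW = 3840, outputH = 2160; // fixed 4K output resolution
    double renderScale = 1.0;                 // fraction of output res actually rendered

    // Fake per-frame GPU costs at full resolution, just to drive the loop.
    double gpuMsAtFullRes[] = {22.0, 21.0, 19.5, 18.0, 16.0, 14.5, 15.5, 16.5};

    for (double fullResMs : gpuMsAtFullRes) {
        // GPU cost scales roughly with pixel count, i.e. with renderScale^2.
        double frameMs = fullResMs * renderScale * renderScale;

        // Simple proportional feedback: over budget -> shrink, under -> grow.
        double error = (targetMs - frameMs) / targetMs;
        renderScale = std::clamp(renderScale * (1.0 + 0.5 * error), 0.5, 1.0);

        int renderW = static_cast<int>(outputW * renderScale);
        int renderH = static_cast<int>(outputH * renderScale);
        std::printf("frame: %.1f ms, render %dx%d -> upscale to %dx%d\n",
                    frameMs, renderW, renderH, outputW, outputH);
    }
    return 0;
}
```

The dynamic-res part only decides how small the input frame gets; the DLSS-style part is the upscale step at the end, which in the real thing uses motion vectors and a trained network rather than a plain resize.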
hnnnnng.
I hate going conspiracy, but today is so full of indirect confirmations I can barely bring myself to sleep.
I think that is fairly likely considering the perf/cost of the 3070/3080.
Once you OC them bitches to 2GHz, where are the gifs?