> People seriously expect next gen to kill PC gaming.

I have not heard one person say that.

Same dance, different song, in every next-gen console hype cycle.
> I mean it's like what, 250W just for this card? Isn't that more than the entire Xbox One X? You put something like this in a console and it's the RROD saga all over again.

The XB1X uses something like 180W in a console half the size of the Xbox. It wouldn't surprise me if the Xbox Series X is designed for 300W. With a 140mm fan exhausting through the top, it should cool the console enough for such power consumption.
From what we're hearing, the Series X will push beyond 300W.
I like the contrast
They're a useful metric for researchers and academics, not enthusiasts on a gaming forum. I disagree about the esoteric nature of the workloads; I think low-occupancy workloads will become increasingly important as time goes on. Volumetric effects and ray tracing are both pretty hard to make high-occupancy. I get that you don't think there's a point to explaining it, but I think it's worth saying how pointless this whole discussion is.
Fine, I'll satisfy your pedantry. TFs aren't useful here because there are so many performance characteristics that differ between a console GPU and a desktop GPU. This is the point of my earlier posts. When comparing two desktop GPUs, TFs may be a valid measure. They're still mostly marketing fuzz, but they can be useful if you know that your workload is (as said above) high-occupancy, meaning the majority of operations are floating-point ops. Machine learning is an applicable workload.
Comparing a console GPU and a desktop GPU with just TFLOPs is nonsense though. There's so much more going on. I'm sorry if saying "TFs don't mean anything" pricked you, but as far as this conversation is concerned, they don't. They're not useful and they just end up confusing people who think they know more than they do.
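To put rough numbers on it (purely illustrative specs, nothing leaked): the headline figure is just ALU count times clock times two FMA ops, which is exactly why it says nothing about loads, stores, or occupancy.

```python
# "Paper TFLOPS" is just arithmetic: ALUs x clock (GHz) x 2 ops per
# fused multiply-add, divided by 1000 to go from GFLOPS to TFLOPS.
def paper_tflops(alus: int, clock_ghz: float) -> float:
    return alus * clock_ghz * 2 / 1000

print(paper_tflops(2304, 1.80))  # hypothetical 36-CU RDNA part -> ~8.3 TF
print(paper_tflops(2944, 1.71))  # an RTX 2080 at boost clock   -> ~10.1 TF
```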
> > They are going to be more around 2070 level, right?
>
> Given the leaked specs, I don't think it's a surprise to anyone.

Leaked specs suggest they are around the 2080 Super in power.
> Imagine having to point out that a card this size is better than what you can fit into a console:

I mean, when you look at what is said in some threads and places, and given some reactions to this obvious statement...
I wouldn't get my hopes up, but let's see.
> Would it outperform a card due to customization? Don't consoles generally outperform their on-paper specs??

Yes.
Me neither. Even if dev kits have this kind of performance, I would be surprised to see it in the final retail release.
> Can't you just get a new GPU down the road (sell and upgrade)? Pretty much anyone replacing/upgrading their system a few years after the consoles' release will be able to get good value and be set for the rest of the gen, so maybe just count on that down the road. In the meantime just bump stuff down to 1080p or something. I'm curious how it'll all play out, but there'll always be a solution around the corner.

Well, it's a prebuilt PC and I don't have the boxes that come with the graphics card etc. Is it still sellable at a decent price without the official boxes?
Outside of "overpriced", these are also the first consumer cards with hardware-accelerated RT support. Those GPUs are packed with a lot of shit on die.
I'll just wait for official news from Xbox and AMD.
(Regardless, the console in Canada is gonna be what, $600 CAD max? A 2080 is closer to $1K lmao)
The 2080 Super costs 800 euros, and that's a GPU alone. You can't be serious.
> Would it outperform a card due to customization? Don't consoles generally outperform their on-paper specs??

No... this gen's consoles perform in line with their respective desktop GPUs.
> In raw performance, I am sure the 2080 could be better. However, the optimization for consoles goes a lot farther (just ask John Carmack), so the point is moot.

Modern DX12 and Vulkan, plus more easily usable hardware (you can extract perf more easily these days, especially on NV), make that old John Carmack quote you are thinking of not very relevant. In the DX9 days? Sure.
If it is anywhere in the neighborhood of the same base performance, plus a good dev cycle, you'll get a hell of a lot more performance per dollar.
> The 2080 Super costs 800 euros, and that's a GPU alone. You can't be serious.

I'm not talking about my expectations. All I'm saying is the most prevalent leaks and rumors suggest both consoles are above 12 TF RDNA.
Nvidia should be grateful for the new consoles. They will probably push more ray tracing in new game releases than we see currently.
> Modern DX12 and Vulkan, plus more easily usable hardware (you can extract perf more easily these days, especially on NV)

There are still things you just can't do under DX12/Vulkan, and optimizing around a single hw target is simply more cost effective; there's no way around that until we stop using humans for the work ;)
Oh my. This is gonna be fun to respond to.
Consoles cut back on anisotropic filtering because it thrashes your cache. You have to do 4-8x more memory loads than you normally would, evicting the most recently used data that's kept around for faster future accesses.
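As a rough sketch of the tap math (worst-case upper bounds; real hardware adapts the probe count to the surface angle):

```python
# Rough texel-fetch counts per pixel. A trilinear sample reads two bilinear
# footprints (8 texels); each anisotropy level adds another trilinear probe
# along the axis of anisotropy, so the worst case scales linearly.
TEXELS_PER_BILINEAR = 4

def fetches(trilinear: bool = True, aniso: int = 1) -> int:
    probes = TEXELS_PER_BILINEAR * (2 if trilinear else 1)  # 8 for trilinear
    return probes * aniso

print(fetches(aniso=1))  # plain trilinear: 8 texel reads
print(fetches(aniso=8))  # 8x aniso: up to 64 reads, hence the cache thrashing
```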
Please don't be so pedantic. It's pointless for business reasons. You could push a shooter to 120 FPS, but it wouldn't look good, it wouldn't go through HDMI, no TV would be able to display it, and no average consumer would be able to tell the difference. As a result, we prioritized things that the consumer would notice or has had marketed to them (4K).
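The HDMI point is just bandwidth math. A quick check with HDMI 2.0-era figures (active pixels only; blanking intervals add more on top):

```python
# Uncompressed video bandwidth = width x height x refresh x bits per pixel.
# HDMI 2.0 tops out around 14.4 Gbps of effective data rate.
def gbps(w: int, h: int, hz: int, bpp: int = 24) -> float:
    return w * h * hz * bpp / 1e9

print(gbps(3840, 2160, 60))   # ~11.9 Gbps: 4K60 just fits HDMI 2.0
print(gbps(3840, 2160, 120))  # ~23.9 Gbps: 4K120 needs HDMI 2.1
```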
This is not at all what I said. PC doesn't have longer loading times; PC uses loading times to transfer as much data through the PCI-E bus as it can. Consoles use this time to fill up GDDR memory. I'm just explaining one technique desktops use for latency hiding.
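A toy sketch of that overlap, with made-up names standing in for real async I/O and a GPU copy queue:

```python
# Double-buffered streaming: keep reading the next chunk from disk while the
# previous one is being pushed over the PCI-E bus. The queue's small maxsize
# is what makes this a double buffer.
import threading
import queue

chunks = [f"chunk_{i}" for i in range(8)]  # stand-ins for asset blocks
staged = queue.Queue(maxsize=2)

def read_from_disk():
    for c in chunks:
        staged.put(c)    # pretend this is a slow disk read
    staged.put(None)     # sentinel: nothing left to stream

reader = threading.Thread(target=read_from_disk)
reader.start()
while (c := staged.get()) is not None:
    print(f"uploading {c} to VRAM while the next read is in flight")
reader.join()
```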
> Some people do use it, but not for comparing a desktop and console GPU.

It's a made-up marketing term that no one in the real world would use. A GPU executes millions of tiny shader programs every second. If those shader programs consisted of nothing but floating-point operations, then TFLOPS might be more important. But they include logical operations, conditional operations, integer operations, and most importantly loads and stores from texture units (GDDR). It doesn't matter if your GPU is 24 TFLOPS if all you have is loads and stores in your shader programs; you won't be doing any floating-point ops.
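A roofline-style sanity check makes the same point with hypothetical figures: a shader that does few math ops per byte fetched is limited by memory bandwidth, and the TFLOPS number never enters into it.

```python
# Attainable throughput is capped by whichever is lower: compute peak, or
# memory bandwidth times arithmetic intensity (FLOPs per byte fetched).
def attainable_tflops(peak_tflops: float, bw_gbs: float, flops_per_byte: float) -> float:
    return min(peak_tflops, bw_gbs * flops_per_byte / 1000)

# Hypothetical 12 TF GPU with 448 GB/s of GDDR6:
print(attainable_tflops(12.0, 448, 0.5))  # load/store heavy: ~0.22 TF usable
print(attainable_tflops(12.0, 448, 100))  # math heavy: the full 12 TF
```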
> There are still things you just can't do under DX12/Vulkan, and optimizing around a single hw target is simply more cost effective; there's no way around that until we stop using humans for the work ;)

Haha, yeah. GCN on PC probably also loves the wavefront mapping already done on console working out so well under Vulkan :D. Turing, Pascal and such see less love there.
How much it all matters is definitely debatable though, even if you had a definite % number you could expect.
> Would it outperform a card due to customization? Don't consoles generally outperform their on-paper specs??

Yes. They always do. Every single generation. Look what they were doing with a measly 512MB of RAM at the end of last gen.
> Yes. They always do. Every single generation. Look what they were doing with a measly 512MB of RAM at the end of last gen.

They do not outperform their paper specs; they more closely come to utilising their paper specs. "Punching above their weight" is a phrase I really dislike; rather, they leverage their weight.

> It would seemingly outperform similar spec PC cards if it makes better use of its features, such as using dynamic resolution and variable rate shading, while PC is brute forcing resolution and shading.

VRS is available on PC (that is where it started, on NV cards) and DRS is likewise available in PC titles, as a game-per-game thing, much like it is on console.
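For what it's worth, the DRS part is only a few lines of control logic anyway; here's a generic sketch (assumed behaviour, not any particular engine's):

```python
# Dynamic resolution scaling: adjust the render scale by how far the last
# frame missed the GPU budget. Pixel count grows with scale^2, hence sqrt.
def next_scale(scale: float, gpu_ms: float, budget_ms: float = 16.6) -> float:
    scale *= (budget_ms / gpu_ms) ** 0.5
    return min(1.0, max(0.5, scale))  # clamp to e.g. 50%..100% of target res

print(next_scale(1.0, 22.0))  # over budget: drop toward ~0.87
print(next_scale(0.8, 12.0))  # headroom: climb back toward native
```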
There is no doubt the 2080 is more powerful, but I would rather get an entire console instead of an overpriced GPU.
> Modern DX12 and Vulkan, plus more easily usable hardware (you can extract perf more easily these days, especially on NV), make that old John Carmack quote you are thinking of not very relevant. In the DX9 days? Sure.

GPU-driven pipelines are probably more relevant to the issue at hand, since they significantly reduce surface-area exposure to the command buffer generation portion of the API. Memory management on PC is still messy, and the compiler infrastructure is still far more opaque and difficult to deal with (PSO build time, codegen quality, intrinsics, etc.), so things aren't necessarily great still.
> Haha, yeah. GCN on PC probably also loves the wavefront mapping already done on console working out so well under Vulkan :D. Turing, Pascal and such see less love there.

I'm not sure what you mean here (are you just talking about 64-thread dispatch granularity?), but if I were NV, one threat I would consider is that rendering programmers think almost entirely in terms of GCN now. That said, NV's devrel/architects are great folks too, and we have excellent conversations with them fairly regularly.
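If it is the granularity thing, a toy way to see the cost (GCN schedules 64-wide waves, Turing/Pascal 32-wide warps):

```python
# Lane utilization for a threadgroup size that isn't a multiple of the wave
# width: the hardware rounds up to whole waves and idles the leftover lanes.
from math import ceil

def lane_utilization(threads: int, wave: int) -> float:
    return threads / (ceil(threads / wave) * wave)

print(lane_utilization(96, 64))  # GCN wave64: 96/128 = 75% of lanes busy
print(lane_utilization(96, 32))  # NV warp32:  96/96  = 100%
```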
> Nope. Consoles performed exactly as expected based on their specs, as was comprehensively proven during this generation.

Ok, I concede. Regardless, we all gonna eat next gen.
> They do not outperform their paper specs; they more closely come to utilising their paper specs.

Right. It's more accurate to say that consoles perform closer to their theoretical max more often than PC parts do. PCs have heavier abstraction that blunts some of the performance. Consoles have abstraction as well, but it can be bypassed if a dev chooses, which isn't a realistic option for PC developers who have to support a wide array of archs.
> I'm not sure what you mean here (are you just talking about 64-thread dispatch granularity?)

For whatever reason, probably DX11 and associated IHV drivers, I'd say this barely made a difference save for some specific instances.
"Laptops are the fastest growing gaming platform — and just getting started," said Jensen Huang, founder and CEO of NVIDIA, who introduced the lineup at the start of the annual CES tradeshow. "The world's top OEMs are using Turing to bring next-generation console performance to thin, sleek laptops that gamers can take anywhere. Hundreds of millions of people worldwide — an entire generation — are growing up gaming. I can't wait for them to experience this new wave of laptops."
In other news, water is wet and the sky is blue. No one with a shred of common sense would think next-gen consoles would outperform a 2080.
It's the same shit every console gen, people hype up/overestimate the "power" of next-gen hardware only to fall back to reality once specs come out.