In reply to "It's been blown so far out of proportion lately that it has quickly become my least favorite term surrounding discussion of the industry.": I guess you didn't participate in gaming in 2013?
In reply to "Well in this case both cards are so similar that the comparison can be made without any issues.": Sure, but it's still not the be-all-end-all metric.
Teraflops is how many trillions of floating point operations your GPU can handle per second. Each pixel on your screen, of which you might have ~8.3 million, requires calculations to be displayed properly, and if you have complex shaders, you'll need it for that too. Now if you are running 60 fps, you have ~500 million pixels to shade per second. You have objects that need to be tracked to an exact location (i.e. to fp32 precision). You have textures that need to know where to go, and dozens of other important operations that require a floating point answer. The teraflop number is an expression of how fast you can get the GPU to calculate these answers.
The big reason people downplay it now is because of how bad AMD cards were at achieving their theoretical performance this generation. GCN cards often had much higher theoretical performance numbers than their Maxwell/Pascal Nvidia counterparts, but until Vulkan (which was based on an AMD API that gained traction via AMD's clout as the supplier of both home consoles for the Western developer market) and continual driver improvements, AMD always fell far behind in benchmarks. The AMD Vega 64 GPU, for instance, has a ~12 TFLOPS performance number attached to it, but the ~11 TFLOPS GTX 1080 Ti destroyed it, and for a lot of the generation the ~8 TFLOPS GTX 1080 beat it easily.
Anyway, the point is, teraflops is important; it is a measure of how fast a GPU can go, but you can always hit bottlenecks and unoptimized code. There are other things important to a GPU as well, such as ROPs and texture units, but TFLOPS is, overall, the most important measure of a GPU's performance, especially when comparing the same architecture.
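To make the arithmetic concrete, here's a rough sketch (Python, with clock speeds and shader counts as ballpark assumptions on my part) of where that headline number comes from: shader ALUs × 2 FLOPs per fused multiply-add per clock × clock speed, plus the per-second shading budget from the 4K/60 example above.

```python
def peak_tflops(shader_alus, clock_ghz, flops_per_alu_per_clock=2):
    """Theoretical peak: each ALU retires one fused multiply-add (2 FLOPs) per clock."""
    return shader_alus * flops_per_alu_per_clock * clock_ghz / 1000.0

# Rough boost-clock figures for the cards mentioned above (treat as ballpark).
print(peak_tflops(4096, 1.55))   # Vega 64     -> ~12.7 TFLOPS
print(peak_tflops(3584, 1.58))   # GTX 1080 Ti -> ~11.3 TFLOPS

# The shading budget from the example: a ~8.3 million pixel (4K) screen at 60 fps.
pixels_per_second = 3840 * 2160 * 60   # ~498 million pixels to shade every second
print(pixels_per_second)
```

Divide the peak FLOPS by that pixel rate and you get a rough budget of floating point operations available per shaded pixel, which is the sense in which the number actually matters.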
I agree, but some people are acting as if it doesn't matter at all.
It's pretty good when comparing the GPUs inside both next-gen machines (assuming that's why the thread was created).
It's a metric that has been used since forever. I remember full well the FLOPS discussions when I bought my HD 4850 12 years ago. Not sure why people now think this is new.
Of course, these things are *computers*. Everything they were invented for is to do math, fast. There is of course also data juggling and the memory hierarchy everyone should know about when talking about the topic, but in the end they are computing, and the one who is faster at doing so... Comparing the computational power of chips from the same family, it's useful.
Comparing different architectures, it starts to get muddy, and real-world performance benchmarks are more useful.
In reply to "Well, how convenient the PS5 and the XSX have very similar architectures! ...": Sure we can compare them once we have the games and footage and all of that. But no matter what, it won't be the be-all-end-all, cause the…
Can you inflate teraflops through software?
In reply to "Were you a member on Neogaf at that time? I joined in 2013, it was a huge point of discussion. In fact, that's where the famous Albert Penello quote originated from.": Wasn't a member those days, but I was there 24/7!
Here. That 30% difference he's discussing? He's talking about the TF difference.
It has only gotten worse since 2013. The moment Microsoft started including the term in every trailer surrounding the 1X and now the Series X, it's like every forum poster suddenly works at Digital Foundry.
In reply to "Wasn't a member those days, but I was there 24/7!": You're wrong. The PS4 GPU was widely understood to be "40% better" than the XB1 GPU. In fact, it was such a prevalent topic of discussion that a Microsoft employee tried to downplay the TF difference.
I remember some discussions about teraflops, but there were far fewer of them than these days, in my opinion. But maybe I'm wrong.
In reply to "I observed that the TF specification suddenly became less important at the very moment when the PlayStation could not be No. 1 here ...": Yes, just as the same was done when Durango (Xbox One) had about 33% fewer TFLOPS than Orbis (PS4).
In reply to "It's a metric that has been used since forever ...": It was a metric used on the Dreamcast, PlayStation 2, Xbox, and GameCube. We all know why it is now being downplayed.
Inflate, no.
In reply to "It was an important metric when evaluating GPU performance. But console warriors have emptied it of meaning in the space of a few months.": Yup, it's an important metric for maximum ALU throughput.
Er, technically, yes? You can use simpler, less precise math. (Or numbers, rather.) It still requires the hardware to support that kind of thing, but that's exactly what stuff like FP16 (and whatever AMD calls their solution) is for: doubling the FLOPS at the cost of precision. There's also IOPS, which can work even faster by using integers rather than floating-point numbers.
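A minimal sketch of what that buys you, assuming a GPU with packed FP16 support (AMD brands this Rapid Packed Math on Vega, if I remember right); the figures below are illustrative, not any specific card:

```python
# Peak FLOPS = ALUs x FLOPs per ALU per clock x clock speed.
# With packed FP16, each ALU retires two half-precision FMAs per clock instead of one FP32 FMA.
alus, clock_hz = 4096, 1.5e9               # hypothetical Vega-class figures, purely illustrative
fp32_peak = alus * 2 * clock_hz            # one FP32 FMA  = 2 FLOPs per ALU per clock
fp16_peak = alus * 4 * clock_hz            # two FP16 FMAs = 4 FLOPs per ALU per clock
print(fp32_peak / 1e12, fp16_peak / 1e12)  # ~12.3 vs ~24.6 TFLOPS -- same silicon, double the number
```

So the hardware isn't going any faster in an absolute sense; you're just trading precision for throughput on the workloads that can tolerate it.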
I remember that at some point in the past we used "polygons". How many polygons can current-gen consoles render versus older gens?
Well, how convenient the PS5 and the XSX have very similar architectures!
Only difference really is the memory configuration, but if they are fast enough it shouldn't matter that much.
In reply to "Er, technically, yes? You can use simpler, less precise math ...": Lol, I was just referring to how some cheap-brand digital cameras upscale photos in software to give the impression they have more resolution than their sensor actually has, but in the end I actually learned something.
In reply to "I observed that the TF specification suddenly became less important at the very moment when the PlayStation could not be No. 1 here ...": Exactly this.
In reply to "I don't know if it's that important or not, but I remember there weren't many discussions about teraflops in consoles like this before.": That's because PS4/One was the first console generation to go past 1 TF 🤷♀️
In reply to "I observed that the TF specification suddenly became less important at the very moment when the PlayStation could not be No. 1 here ...": Yeah, basically. There was so much talk and speculation about it, like 10+ threads each 400 pages, but then after the specs for both systems were revealed it became meaningless and not very important.
In reply to "And even then, that's theoretical peak performance. How that actually translates to real performance, taking into account the system's full architecture as a whole, is another story, which is why the 'discussions' surrounding TF here make me cringe so hard. Most people's knowledge stops and starts at the fact they know one number is higher than the other. So yeah, definitely important, but nowhere near the most important metric to judge a system's performance.": It's funny that I made that argument in 2013 already, but nobody cared :)
I guess they are important for some, when you need to quantify the performance of a graphics card.
What is strange to me is that in practice Nintendo makes games for a console that has about a quarter of a teraflop, yet somehow manages to make them run at 1080p/60 and look good at the same time. So what are all those teraflops used for in the other consoles?
In reply to "I honestly don't know, but Steve from GamersNexus (excellent PC hardware centric gaming YouTube channel) paid it almost no mind when he made a video after Cerny's PS5 tech talk.": Link?
In reply to "And even then, that's theoretical peak performance ...": PS4 had 50% more TF than the One and was on average able to churn out about 50% more pixels (the good old 900p vs 1080p).
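For what it's worth, the back-of-the-envelope math roughly holds with the commonly cited figures (treat the exact numbers as my assumptions): PS4 at 1.84 TFLOPS, Xbox One at 1.31, against the pixel counts of 1080p and 900p.

```python
# Commonly cited figures; the Xbox One was ~1.23 TFLOPS before its pre-launch upclock to 853 MHz.
ps4_tf, xb1_tf = 1.84, 1.31
print(ps4_tf / xb1_tf)              # ~1.40x the theoretical compute

# Typical multiplatform resolutions that generation.
pixels_1080p = 1920 * 1080          # 2,073,600 pixels per frame
pixels_900p = 1600 * 900            # 1,440,000 pixels per frame
print(pixels_1080p / pixels_900p)   # 1.44x the pixels -- the compute gap maps almost directly to resolution
```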
In reply to "It's funny that I made that argument in 2013 already, but nobody cared :)": It turns out it wasn't the best argument, as we saw how multiplatform games run on consoles ;) The question is whether a 20% difference in GPU performance will be as visible as a 40% one. I don't think so.
In reply to "Teraflops is how many trillions of floating point operations your GPU can handle per second ...": If that's the case, can a mod sticky this somewhere?
I observed that the TF specification suddenly became less important at the very moment when the PlayStation could not be No. 1 here. Coincidence?
That's because PS4/One was the first console generation to go past 1 TF 🤷♀️
Of course we had GFLOPS before, but there was always some form of synthetic measurement, such as triangles per second and whatnot.
It's a way to roughly compare GPU prowess of chips which are relatively similar generation-wise.
In reply to "It turns out it wasn't the best argument, as we saw how multiplatform games run on consoles ;) ...": But that is a different talking point. TFs didn't lose or gain meaning in 7 years.
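To put rough numbers on that question, using the publicly stated peak figures (1.84 vs 1.31 TFLOPS last generation, 10.28 vs 12.15 this time, with the PS5 figure being its variable-clock maximum):

```python
# Publicly stated peak figures (PS5's is its variable-clock maximum); last gen for comparison.
ps4, xb1 = 1.84, 1.31
ps5, xsx = 10.28, 12.15

last_gen_gap = (ps4 - xb1) / xb1    # ~0.40 -> PS4 had ~40% more compute than the Xbox One
next_gen_gap = (xsx - ps5) / ps5    # ~0.18 -> Series X has ~18% more compute than the PS5
print(f"{last_gen_gap:.0%} vs {next_gen_gap:.0%}")   # 40% vs 18%
```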
In reply to "You're wrong. The PS4 GPU was widely understood to be '40% better' than the XB1 GPU ...": Not just downplay, he accused Sony of lying lol.