How relevant is the teraflop unit to real world graphical performance?

  • It's the most important unit of measure

    Votes: 55 9.5%
  • It's pretty important, depending on the task

    Votes: 285 49.1%
  • It's not all that relevant to real world performance

    Votes: 189 32.6%
  • It's not important at all

    Votes: 51 8.8%

  • Total voters
    580

Fatmanp

Member
Oct 27, 2017
4,446
Teraflops measure how many trillions of floating-point operations your GPU can handle per second. Each pixel on your screen, of which you might have ~8.3 million, requires calculations to be displayed properly, and if you have complex shaders, you'll need it for those too. Now, if you are running at 60fps, you have ~500 million pixels to shade per second. You have objects that need to be tracked to an exact location (i.e., to FP32 precision). You have textures that need to know where to go, and dozens of other important operations that require a floating-point answer. The teraflop number expresses how fast you can get the GPU to calculate these answers.

The big reason people downplay it now is how bad AMD cards were at achieving their theoretical performance this generation. GCN cards often had much higher theoretical performance numbers than their Maxwell/Pascal Nvidia counterparts, but until Vulkan (which was based on AMD's Mantle API and gained traction via AMD's clout as the chip supplier for both home consoles in the western developer market) and continual driver improvements, AMD always fell far behind in benchmarks. The AMD Vega 64, for instance, has a 12 TFLOPS performance number attached to it, but the 11 TFLOPS GTX 1080 Ti destroyed it, and for a lot of the generation the 8 TFLOPS GTX 1080 beat it easily.

Anyway, the point is, teraflops are important: they're a measure of how fast a GPU can go, but you can always hit bottlenecks and poorly optimized code. There are other things important to a GPU as well, such as ROPs and texture units, but TFLOPS is, overall, the most important measure of a GPU's performance, especially when comparing the same architectures.

This is my take on it as well. When comparing horsepower within the same architecture (which the PS5 and XSX share), it is important. Early in the gen, the ~2 TF difference is probably not going to matter much, but once the second wave of games starts to hit and multiplatform devs get used to both consoles, with engines more tuned for them, it will probably start to show a difference.
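To put rough numbers on the explainer quoted above, here's a minimal back-of-the-envelope sketch; the 10 TFLOPS GPU is a hypothetical figure, not any specific card:

```python
# Back-of-the-envelope FLOP budget per shaded pixel, using the numbers
# from the quoted post. The 10 TFLOPS GPU is hypothetical.

PIXELS_4K = 3840 * 2160   # ~8.3 million pixels
FPS = 60
GPU_TFLOPS = 10.0         # hypothetical theoretical FP32 peak

pixels_per_second = PIXELS_4K * FPS        # ~498 million pixels/s
flops_available = GPU_TFLOPS * 1e12        # ops per second at peak
flops_per_pixel = flops_available / pixels_per_second

print(f"pixels shaded per second: {pixels_per_second:,}")
print(f"theoretical FP32 ops per pixel: {flops_per_pixel:,.0f}")
# ~20,000 ops per pixel in theory, but shading shares that budget with
# geometry, physics, post-processing, and everything else the GPU does.
```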
 

DanteMenethil

Member
Oct 25, 2017
8,113
It's a metric that has been used since forever. I remember full well the FLOPS discussions when I bought my HD 4850 12 years ago. Not sure why people now think this is new.
 

Deleted member 224

Oct 25, 2017
5,629
Sure, but it's still not the be-all, end-all metric.
It's pretty good when comparing the GPUs inside both next-gen machines (assuming that's why the thread was created).

The XSX has a roughly 20% stronger GPU than what's in the PS5.
 

Sean Mirrsen

Banned
May 9, 2018
1,159
It's a decent estimate of capability, especially when comparing products of similar make (and by the same manufacturer). It's far from the be-all-end-all, especially where specialized hardware exists (see: ray tracing/DLSS and the Tensor cores), but it'll give you a decent understanding of how fast a given GPU or CPU can make stuff happen.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Comparing the computational power of chips from the same family, it's useful.
Comparing different architectures, it starts to get muddy, and real-world performance benchmarks are more useful.
Of course, these things are *computers*. Everything they were invented for is to do math, fast. There is also the data juggling and the memory hierarchy everyone should know about when talking about this topic, but in the end they are computing machines, and one of them is faster at it...
But that doesn't mean teraflops are especially representative of the computing power a machine has. The funny thing is that the perceived importance depends not on anything technical but on how you think the whole package should be judged, which is why the narratives shifted from 2013 to 2020.
 

Deleted member 5491

User requested account closure
Banned
Oct 25, 2017
5,249
Well, how convenient: the PS5 and the XSX have very similar architectures!
The only real difference is the memory configuration, but if both are fast enough it shouldn't matter that much.
Sure, we can compare them once we have the games and footage and all of that. But no matter what, it won't be the be-all end-all, because the memory bandwidth is different, as is the mass storage speed, which will have an impact on processor time and RAM use.
 

Shoichi

Member
Jan 10, 2018
10,630
It's important for performance, especially with regard to resolution, fps, and graphical fidelity, no matter what companies' PR departments try to sell.
More important are the price and the quality of the software.
 

Martinski

Member
Jan 15, 2019
8,443
Göteborg
It has mattered less in the PC space, though. As someone mentioned, graphics cards like the GTX 1080 would sometimes outclass the Vega 64 even though the Vega had way more TFLOPS. In most cases AMD could not take advantage of that extra computing power, and Nvidia just had a better architecture overall, so most games were better optimized for Nvidia cards for years.

AMD cards at least used to be better at blockchain workloads and were used for bitcoin mining.
 

Shopolic

Avenger
Oct 27, 2017
7,049
Were you a member on NeoGAF at that time? I joined in 2013, and it was a huge point of discussion. In fact, that's where the famous Albert Penello quote originated from.
Here
That 30% difference he's discussing? He's talking about the TF difference.
I wasn't a member in those days, but I was there 24/7!
I remember some discussions about teraflops, but far fewer than these days, in my opinion. But maybe I'm wrong. The things I remember more than teraflops were resolutiongate, the 8GB of GDDR5, and things like that.
 

Deleted member 224

Oct 25, 2017
5,629
I wasn't a member in those days, but I was there 24/7!
I remember some discussions about teraflops, but far fewer than these days, in my opinion. But maybe I'm wrong.
You're wrong. The PS4 GPU was widely understood to be "40% better" than the XB1 GPU. In fact, it was such a prevalent topic of discussion that a Microsoft employee tried to downplay the TF difference.
 

Dan Thunder

Member
Nov 2, 2017
14,324
It is important, but it's not the only measure of what a machine can do. You could have a lower teraflop figure but higher figures in other areas to help balance things out.

It's kind of weird how, in the space of less than two months, people on both sides of the console debate have gone from 'TFLOPS are king' to 'they're not relevant'. It's not just the Sony side, either. I've seen people state that the Series X is the greatest machine out there because it's 12 TF, then state that the potential 4 TF of this rumoured Lockhart console is irrelevant because TFLOPS aren't an accurate measure of a console's power!
 

thematic

Fallen Guardian
Member
Oct 31, 2017
937
I remember that in the past we used polygon counts. How many polygons can a current-gen console render versus an older gen?
 

gremlinz1982

Member
Aug 11, 2018
5,332
It's a metric that has been used since forever. I remember full well the FLOPS discussions when I bought my HD 4850 12 years ago. Not sure why people now think this is new.
It was a metric used on the Dreamcast, PlayStation 2, Xbox, and GameCube. We all know why it is now being downplayed.
GPUs from the start of this current generation, through the mid-generation refresh, and into next generation are even more comparable, because they all come from the same Radeon family.
 
Oct 25, 2017
3,065
It was an important metric when evaluating GPU performance. But console warriors have emptied it of meaning in the space of a few months.
 

Pottuvoi

Member
Oct 28, 2017
3,080
Can you inflate teraflops through software?
Inflate, no.
Deflate, absolutely.
It was an important metric when evaluating GPU performance. But console warriors have emptied it of meaning in the space of a few months.
Yup, it's an important metric for the maximum throughput of the ALUs.

Just like the texture units' performance in texel rates for each texture type, or the ROPs' maximum pixel output.

Each is a hard limit that is rarely achieved, and it's good to know so one can consider how sub-optimally the work is being done. (If something, to be feasible, needs 2x the maximum throughput, it's good to re-evaluate the approach.)
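To make the "hard limit" point concrete, a small hypothetical feasibility check (all workload numbers invented for illustration):

```python
# Feasibility check against a hard ALU throughput ceiling, in the spirit
# of the post above. All workload numbers here are hypothetical.

def throughput_needed_vs_peak(tflops: float, pixels: int, fps: int,
                              ops_per_pixel: float) -> float:
    """Fraction of theoretical peak ALU throughput a per-pixel effect needs."""
    required_ops = pixels * fps * ops_per_pixel   # FP32 ops/s the effect wants
    peak_ops = tflops * 1e12                      # theoretical ceiling
    return required_ops / peak_ops

# A hypothetical 40,000-op shader at 4K/60 on a 10 TFLOPS GPU:
ratio = throughput_needed_vs_peak(10.0, 3840 * 2160, 60, 40_000)
print(f"needs {ratio:.2f}x of peak throughput")  # ~1.99x -> rethink the approach
```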
 

Sean Mirrsen

Banned
May 9, 2018
1,159
Can you inflate teraflops through software?
Er, technically, yes? You can use simpler, less precise math (or numbers, rather). It still requires the hardware to support that kind of thing, but that's exactly what stuff like FP16 (and whatever AMD calls their solution) is for: doubling the FLOPS at the cost of precision. There's also IOPS, which can be even faster by using integers rather than floating-point numbers.
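As a sketch of where the headline number comes from and what packed FP16 does to it (the core count and clock below are hypothetical, not any real card; AMD markets packed FP16 as Rapid Packed Math):

```python
# How the headline FLOPS number is derived, and how packed FP16 doubles it.
# Core count and clock are hypothetical.

cores = 2560          # hypothetical shader ALU lane count
clock_hz = 1.8e9      # hypothetical clock, 1.8 GHz

fp32_flops = cores * clock_hz * 2   # a fused multiply-add counts as 2 ops
fp16_flops = fp32_flops * 2         # two FP16 ops packed per FP32 lane

print(f"FP32: {fp32_flops / 1e12:.2f} TFLOPS")  # 9.22
print(f"FP16: {fp16_flops / 1e12:.2f} TFLOPS")  # 18.43
# The doubled figure only applies to work that tolerates half precision.
```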
 

Nintendo

Prophet of Regret
Member
Oct 27, 2017
13,435
It's only important when the GPUs are the same architecture. Otherwise, it's meaningless.

Well, how convenient: the PS5 and the XSX have very similar architectures!
The only real difference is the memory configuration, but if both are fast enough it shouldn't matter that much.

The difference in teraflops also shouldn't matter that much in that case. The gap is so small that the benefit would most likely be a more stable framerate, if anything. I don't think the difference would be enough to bump the resolution from 1440p to native 4K while maintaining the same framerate, like it does between the X1X and PS4 Pro. Though I expect the XSX to have better ray-tracing performance due to its extra compute units.
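For a concrete sense of scale, a quick sketch using the commonly cited theoretical peaks (12.15 TFLOPS for the XSX, 10.28 for the PS5):

```python
# Pixel cost of native 4K vs 1440p, compared to the XSX/PS5 compute gap.

pixels_4k = 3840 * 2160
pixels_1440p = 2560 * 1440
resolution_ratio = pixels_4k / pixels_1440p   # 2.25x the pixels

compute_ratio = 12.15 / 10.28                 # ~1.18x the TFLOPS

print(f"4K needs {resolution_ratio:.2f}x the pixel throughput of 1440p")
print(f"XSX has {compute_ratio:.2f}x the theoretical compute of PS5")
# An ~18% compute edge can't cover 2.25x the pixels at the same framerate,
# so the gap is more likely to show as framerate stability or a modest
# dynamic-resolution difference.
```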
 

Night Hunter

Member
Dec 5, 2017
2,816
Good to see that the right poll option is leading. FLOPS may not be the be-all, end-all hardware metric, but depending on the task being performed they're a pretty good number to have, especially with two machines this comparable (architecture-wise). But with things like dynamic resolution and reconstruction techniques, I don't really know if the difference is going to be all that visible with such a small gap in performance.
 

Max|Payne

Member
Oct 27, 2017
9,085
Portugal
Inflate, no.
Deflate, absolutely.
Er, technically, yes? You can use simpler, less precise math (or numbers, rather). It still requires the hardware to support that kind of thing, but that's exactly what stuff like FP16 (and whatever AMD calls their solution) is for: doubling the FLOPS at the cost of precision. There's also IOPS, which can be even faster by using integers rather than floating-point numbers.
Lol, I was just referring to how some cheap-brand digital cameras upscale photos in software to give the impression they have more resolution than their sensor actually has, but in the end I actually learned something.
 
Oct 27, 2017
2,551
I honestly don't know, but Steve from GamersNexus (an excellent PC hardware-centric gaming YouTube channel) paid it almost no mind when he made a video after Cerny's PS5 tech talk.
 

HBK

Member
Oct 30, 2017
8,076
It's an imperfect comparison metric, since different architectures have different efficiency bottlenecks, but as with other synthetic metrics, it has the value of being relatively context-independent. It's a rough assessment of computing prowess.
 

Tomo815

Banned
Jul 19, 2019
1,534
I guess they are important when you need to quantify the performance of a graphics card.

What is strange to me is that, in practice, Nintendo makes games for a console that has something like a quarter of a teraflop, but somehow manages to make them run at 1080p/60 and look good at the same time. So what are all those teraflops used for in the other consoles?
 

Sei

Member
Oct 28, 2017
5,833
LA
It's all about bottlenecks: you can have a lot of data, but if your lanes are limited, you won't get the performance you expect.
 

MatrixMan.exe

Member
Oct 25, 2017
9,509
They're important as a raw hardware performance metric.

And even then, that's theoretical peak performance. How that actually translates to real performance, taking into account the system's full architecture as a whole, is another story, which is why the 'discussions' surrounding TF here make me cringe so hard. Most people's knowledge stops and starts at the fact that they know one number is higher than the other.

So yeah, definitely important, but nowhere near the most important metric for judging a system's performance.
 

HBK

Member
Oct 30, 2017
8,076
I don't know if it's that important or not, but I remember there weren't as many discussions about teraflops in consoles like this before.
That's because PS4/One was the first console generation to go past 1 TF 🤷‍♀️

Of course we had GFLOPS before, but there was always some form of synthetic measurement, such as triangles per second and whatnot.

It's a way to roughly compare the GPU prowess of chips that are relatively similar generation-wise.
 

Deleted member 10737

User requested account closure
Banned
Oct 27, 2017
49,774
I observed that the TF specification suddenly became less important at the very moment when the Playstation could not be No. 1 here. Coincidence?
Yeah, basically. There was so much talk and speculation about it, like 10+ threads of 400 pages each, but then after the specs for both systems were revealed it became 'meaningless' and 'not very important'.
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
And even then, that's theoretical peak performance. How that actually translates to real performance, taking into account the system's full architecture as a whole, is another story, which is why the 'discussions' surrounding TF here make me cringe so hard. Most people's knowledge stops and starts at the fact that they know one number is higher than the other.

So yeah, definitely important, but nowhere near the most important metric for judging a system's performance.
It's funny that I was already making that argument in 2013, but nobody cared :)
 

Nintendo

Prophet of Regret
Member
Oct 27, 2017
13,435
I guess they are important when you need to quantify the performance of a graphics card.

What is strange to me is that, in practice, Nintendo makes games for a console that has something like a quarter of a teraflop, but somehow manages to make them run at 1080p/60 and look good at the same time. So what are all those teraflops used for in the other consoles?

All those teraflops? The PS4 only has 1.8, and the Switch isn't a quarter of a teraflop lol, I think it's 1 TFLOP. The architecture is very different anyway. Resolution and framerate aren't everything. Don't tell me you don't see the difference in fidelity between PS4 and Switch games.
 

HBK

Member
Oct 30, 2017
8,076
And even then, that's theoretical peak performance. How that actually translates to real performance, taking into account the system's full architecture as a whole, is another story, which is why the 'discussions' surrounding TF here make me cringe so hard. Most people's knowledge stops and starts at the fact that they know one number is higher than the other.

So yeah, definitely important, but nowhere near the most important metric for judging a system's performance.
The PS4 had 50% more TF than the One and was on average able to churn out about 50% more pixels (the good old 900p vs 1080p).

There's little reason to believe similar figures won't be observed in the near future.
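Those figures roughly check out; a quick sketch using the commonly cited launch-era numbers (the 1.23 TFLOPS value is the pre-upclock Xbox One spec behind the "50% more" claim):

```python
# Last gen's TFLOPS gap vs the resolution gap it typically bought.
# TFLOPS figures are the commonly cited theoretical peaks.

ps4_tf = 1.84
xb1_tf = 1.31               # after the pre-launch upclock to 853 MHz
xb1_tf_early = 1.23         # the earlier figure behind the "50% more" claim

print(f"PS4 vs XB1 (final spec): {ps4_tf / xb1_tf:.2f}x")        # ~1.40x
print(f"PS4 vs XB1 (early spec): {ps4_tf / xb1_tf_early:.2f}x")  # ~1.50x

pixel_ratio = (1920 * 1080) / (1600 * 900)
print(f"1080p vs 900p: {pixel_ratio:.2f}x pixels")               # 1.44x
# Within the same GCN architecture, the compute gap mapped almost
# one-to-one onto the resolution gap.
```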
 

Deleted member 49611

Nov 14, 2018
5,052
To me it's not important. First of all, I don't buy consoles looking for performance, so it does nothing for me.
 

CQC

Member
Oct 25, 2017
1,716
Teraflops measure how many trillions of floating-point operations your GPU can handle per second. Each pixel on your screen, of which you might have ~8.3 million, requires calculations to be displayed properly, and if you have complex shaders, you'll need it for those too. Now, if you are running at 60fps, you have ~500 million pixels to shade per second. You have objects that need to be tracked to an exact location (i.e., to FP32 precision). You have textures that need to know where to go, and dozens of other important operations that require a floating-point answer. The teraflop number expresses how fast you can get the GPU to calculate these answers.

The big reason people downplay it now is how bad AMD cards were at achieving their theoretical performance this generation. GCN cards often had much higher theoretical performance numbers than their Maxwell/Pascal Nvidia counterparts, but until Vulkan (which was based on AMD's Mantle API and gained traction via AMD's clout as the chip supplier for both home consoles in the western developer market) and continual driver improvements, AMD always fell far behind in benchmarks. The AMD Vega 64, for instance, has a 12 TFLOPS performance number attached to it, but the 11 TFLOPS GTX 1080 Ti destroyed it, and for a lot of the generation the 8 TFLOPS GTX 1080 beat it easily.

Anyway, the point is, teraflops are important: they're a measure of how fast a GPU can go, but you can always hit bottlenecks and poorly optimized code. There are other things important to a GPU as well, such as ROPs and texture units, but TFLOPS is, overall, the most important measure of a GPU's performance, especially when comparing the same architectures.
If that's the case, can a mod sticky this somewhere?

please.
 

Nintendo

Prophet of Regret
Member
Oct 27, 2017
13,435
I observed that the TF specification suddenly became less important at the very moment when the Playstation could not be No. 1 here. Coincidence?

IIRC, the PS4 was ~40% more than the X1, and now the XSX is ~17% more than the PS5, so the difference is inherently less important.
 

Martinski

Member
Jan 15, 2019
8,443
Göteborg
That's because PS4/One was the first console generation to go past 1 TF 🤷‍♀️

Of course we had GFLOPS before, but there was always some form of synthetic measurement, such as triangles per second and whatnot.

It's a way to roughly compare the GPU prowess of chips that are relatively similar generation-wise.

Yes, FLOPS have indeed been used for a while when comparing computing power between platforms. The reason we haven't talked in teraflops before is the prefix: it used to be giga, mega, etc. The 8th gen is the first gen to break into teraflops.

But it used to be more common to see articles on how many polygons and triangles per second the PS2, Xbox, etc. could churn out. That has become irrelevant now.
 

LiquidSolid

Member
Oct 26, 2017
4,731
You're wrong. The PS4 GPU was widely understood to be "40% better" than the XB1 GPU. In fact, it was such a prevalent topic of discussion that a Microsoft employee tried to downplay the TF difference.
Not just downplay, he accused Sony of lying lol.

But when people bring up the "40% better" thing as if it was the biggest talking point at the start of this gen, they're deliberately ignoring the context at the time (namely that the 40% weaker console was more expensive and had awful DRM until a few months before launch) as well as how different this gen is from the next.

Anyway, TFLOPS are an important metric for GPUs with similar architectures, but not the be-all-end-all for the entire console, especially next gen, where the biggest upgrades over this gen will be the CPU and SSD. The XSX's TFLOP advantage will be noticeable, but it's hard to say by how much, since the gap between the two next-gen consoles is much smaller than the gaps between PS4/XB1 and Pro/X, plus the whole diminishing-returns thing.
Anyway, TFLOPS are an important metric for GPUs with similar architectures but not the be-all-end-all of the entire console, especially next gen where the biggest upgrades over this gen will be the CPU and SSD. The SX's advantage TFLOP advantage will be noticeable but it's hard to say how much since the gap between the two next gen consoles is much smaller than the ones between the PS4/XB1 and Pro/X and the whole diminishing returns thing.