teraflop
/ˈtɛrəflɒp/
noun
COMPUTING: a unit of computing speed equal to one million million (10^12) floating-point operations per second.
At the risk of losing my gamer cred, I can't quite figure this out. It seems to mostly relate to the processor, but then why not just use the processor speed? I would understand that, and that's worked every other generation.
I know what a motherboard is and how it affects things.
I know what RAM is, and I can understand a graphics card.
Basically, I want to know what 1 teraflop is so that I can quantify it.
In a continuous search for simplification of complex matters, TFLOP/s has become the unit of measure for performance in computing.

Depending on your age you may remember its predecessors, such as the number of bits (e.g. the PS1 is 32-bit and the N64 is 64-bit), the number of polygons per second (the PS2 pushes 75 million polygons per second while the GameCube pushes 20 million), or the frequency of the processor (this Pentium 4 runs at 1.7 GHz while this G4 runs at 867 MHz).
Like its predecessors, it has been used to compare two different pieces of hardware and determine which one is better: more performant, more powerful. As time goes on and we keep replacing these colloquial units of power, they do become more reliable (e.g. counting FLOP/s is more accurate than counting polygons, which in turn is more accurate than counting bits), but in the end you always arrive at a point where the equation "bigger number means better" doesn't work anymore.
Once again we have arrived at an impasse where counting this unit of performance is not as reliable as we generally thought. It has always been this way, as history tells us: in recent years the same number of FLOP/s on an Nvidia card gave more performance than on an equivalent AMD/ATI card, because the architectures were substantially different.
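As a toy illustration of that point (the utilization figures below are invented for the example, not measured data), you can think of delivered performance as paper FLOP/s scaled by how well an architecture keeps its ALUs busy:

```python
# Two hypothetical GPUs with identical paper TFLOP/s can deliver
# different real-world throughput because their architectures keep
# the ALUs busy to different degrees. The utilization factors here
# are invented purely for illustration.

paper_tflops = 10.0

utilization = {
    "architecture A": 0.85,  # hypothetical: keeps its ALUs fed more often
    "architecture B": 0.70,  # hypothetical: stalls more on memory/scheduling
}

for arch, busy in utilization.items():
    print(f"{arch}: {paper_tflops * busy:.1f} effective TFLOP/s")
```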
Now let's get to the current state of things: you are probably asking this question because you have seen the PS5 having a considerably lower amount of TFLOP/s. Some people are saying (or might say) that the difference is not as marked as it seems because other parts are faster, while others say there is a huge difference; or maybe you are asking simply because you want to quantify that difference.
This number of TFLOP/s comes from a specific part of the graphics processor, the ALUs (arithmetic logic units), and equals one trillion (TERA, 10^12) floating-point (FL) operations (OP) per second (/s). The graphics processor, though, is made of multiple parts, some more useful than others depending on the context, such as the amount of cache, the number of TMUs (texture mapping units) or the number of ROPs (raster operations pipelines); all of these are fundamental in producing good-looking images, and you could say that no one of them is more important than the others.
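To make the arithmetic concrete, here is a minimal Python sketch of where the headline figure comes from: number of shader ALUs × clock speed × 2, since each ALU can retire one fused multiply-add (which counts as two floating-point operations) per cycle. The compute-unit counts and clocks below are the publicly stated specs of the two consoles:

```python
def theoretical_tflops(compute_units: int, alus_per_cu: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in TFLOP/s: ALUs x clock x 2 ops per FMA."""
    alus = compute_units * alus_per_cu
    flops = alus * clock_ghz * 1e9 * 2  # one fused multiply-add = 2 FLOPs
    return flops / 1e12                 # scale to tera (10^12)

# RDNA 2 has 64 ALUs ("shaders") per compute unit.
print(theoretical_tflops(36, 64, 2.23))   # PS5           -> ~10.28
print(theoretical_tflops(52, 64, 1.825))  # Xbox Series X -> ~12.15
```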
As for what a TFLOP/s can do, I cannot show you, because it can't be shown in a simple way; sometimes it is useful as a unit, sometimes it is useless. I could show you a machine that produces a terrible picture and one that produces a beautiful one while both of them truthfully have the same "teraflop/s power".
In this particular context, though, it can give you a general, rough idea of how the two consoles compare to each other, given that they share pretty much the same architecture and are used for the same things. Still, going back to the fact that it is a simplification of a complex matter, it cannot be used as the definitive yardstick to say "that one is x amount better than the other".
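If you still want a single number out of it, the most honest one is the ratio of the theoretical peaks computed in the sketch above, with all the caveats attached:

```python
# Ratio of theoretical peaks - a ceiling on the compute gap,
# not a measured difference in real games.
xsx, ps5 = 12.15, 10.28
print(f"{xsx / ps5:.2f}x")  # ~1.18, i.e. ~18% more theoretical compute
```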
In time, a new unit (or units) of measure will surely be found that is more reliable and better suited to the times.