
Lady Gaia

Member
Oct 27, 2017
2,479
Seattle
Am I out of line to think that as the next gen goes along, compression techniques and methods might improve so they can achieve, or come close to achieving, those high peak figures more often?

It's unlikely that we'll see huge strides in lossless compression. Just a couple of percent more would be considered pretty revolutionary. Kraken is absurdly good already, and it sounds like it's largely baked into the hardware design, so that's what it's likely to remain as. The sorts of things that compress really well are those that have large blocks of repeating data. Some textures that are mostly transparent or have a really simplistic pattern on them might fit the bill but not much else would. Still, it's great to know that when that kind of data is present it's going to compress well and load fast.
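To make the "large blocks of repeating data" point concrete, here's a minimal Python sketch, with zlib standing in for Kraken (Kraken is a far stronger codec, but the principle is the same):

```python
import os
import zlib

# Lossless ratio depends almost entirely on how much structure the input
# has; zlib is just a stand-in here, Kraken is better but behaves alike.
repetitive = bytes([0, 0, 0, 255]) * 256_000   # mostly-flat "texture"
random_like = os.urandom(1_024_000)            # high-entropy data

for name, data in [("repetitive", repetitive), ("random-like", random_like)]:
    packed = zlib.compress(data, 9)
    print(f"{name}: {len(data):,} -> {len(packed):,} bytes "
          f"({len(data) / len(packed):.1f}x)")
# the repetitive buffer shrinks by orders of magnitude; the random one barely at all
```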

I would simply count on 8-9GB/s. That's already enough to completely replace the entire contents of RAM in less than two seconds, which is pretty amazing.
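The "less than two seconds" figure checks out with simple arithmetic, assuming the PS5's 16 GB of RAM:

```python
# Sanity check on the claim above, using 16 GB of RAM and the hedged 8-9 GB/s.
ram_gb = 16
for rate_gbps in (8, 9):
    print(f"{rate_gbps} GB/s -> {ram_gb / rate_gbps:.1f} s to refill {ram_gb} GB")
# 8 GB/s -> 2.0 s; 9 GB/s -> 1.8 s
```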
 

BreakAtmo

Member
Nov 12, 2017
12,838
Australia
It's unlikely that we'll see huge strides in lossless compression. […] Some textures that are mostly transparent or have a really simplistic pattern on them might fit the bill, but not much else would. […]

Is it likely that cel-shaded cartoon-style games could get even smaller? All those textures that are just flat single colours...
 
Feb 23, 2019
1,426
i am fine with 10 tflops as well, i just dont think the performance drop is worth the cost. we dont know how much those chips are costing them, but i really dont see a 40mm2 bigger die costing more than $15-30. it bugs me that sony designed the console this way.

that said, 2.23 ghz is indeed way above any of our expectations. we did originally dismiss the 2.0 ghz, 9.2 tflops console, but 10.3 tflops at 2.23 ghz is not something i was expecting and im still struggling to wrap my head around it. subconsciously, i still see this as a 9.2 tflops console because 2.23 ghz seems so surreal. not sure if this is making sense, but thats where my mindset is at the moment. it seems too good to be true.

Worth the cost of what? A marketing number?

I think you're underestimating the die size difference. I suspect Sony would be able to produce 4 PS5s for every 3 XSXs, and Sony needs more volume than MS because demand for their consoles will be higher. However, if Lockhart truly is a thing and has 20 CUs, that does throw an interesting wrench into the situation, since Microsoft could conceivably build a ton of consoles. But it also means that Microsoft has chosen to sabotage their 12TF console with Lockhart, which suggests to me that the XSX is more about winning a marketing bullet-point battle than actually delivering top-tier, cutting-edge visuals. Sony simply weren't willing to make that sacrifice, so they stuck with a powerful one-SKU strategy that is a proven winner.

Also, these consoles are essentially limited by the constraints of the power envelope. Cerny may have decided to reach that power ceiling with a smaller die clocked as fast as possible, knowing that while the theoretical TF wouldn't match a wide-and-slow approach, the much higher clockspeed would bring better utilization and other benefits, and that made more sense with the way they architected the system.

Further, the difference in theoretical performance is the smallest of any console generation in history. 17% is not going to amount to a discernible difference. A 120% SSD performance advantage will. It's the biggest difference between the two consoles and I suspect that Sony made the wise decision to emphasize that even if it ended up sacrificing certain parts of their design.
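The percentage arithmetic here is easy to sanity-check. A quick sketch using the publicly announced figures (the "17%" and "120%" in the post are ballpark):

```python
# Publicly announced figures for the two consoles.
xsx_tf, ps5_tf = 12.155, 10.275        # GPU compute, TFLOPS
xsx_ssd, ps5_ssd = 2.4, 5.5            # raw SSD throughput, GB/s

print(f"XSX TF advantage:  {(xsx_tf / ps5_tf - 1) * 100:.0f}%")    # ~18%
print(f"PS5 SSD advantage: {(ps5_ssd / xsx_ssd - 1) * 100:.0f}%")  # ~129%
```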
 

McScroggz

The Fallen
Jan 11, 2018
5,973
Man, it's gotten to the point in all of the tech threads where I don't know if I should listen to anybody. It seems like every time somebody types out a comment or posts a video that seems to be a good breakdown, an onslaught of people breaks down how they are wrong. It's frustrating because I'm excited for the next generation of consoles and I'm interested in understanding what new and cool things these consoles are doing, but there is such a sharp divide on pretty much everything that I can't enjoy the conversation. It sucks.

About the only thing I'm confident in is both consoles are going to be dope. I just wish I knew enough to be able to spot most of the BS because I've probably already gotten misinformed about things without even knowing it.

But who knows. I just hate how antagonistic it is right now. I don't get it, tbh.
 

RedHeat

Member
Oct 25, 2017
12,690
Man, it's gotten to the point in all of the tech threads where I don't know if I should listen to anybody. […]
The best thing for your sanity is to ignore the console warriors and pick at least one source, then pop into the thread to see how many people corroborate it. So far I'm sticking with DF
 

Deleted member 224

Oct 25, 2017
5,629
Man, it's gotten to the point in all of the tech threads where I don't know if I should listen to anybody. […]
Ignore it all until we get actual console vs. console (on an individual game basis) comparisons from Digital Foundry and others later in the year.
 

McScroggz

The Fallen
Jan 11, 2018
5,973
The best thing for your sanity is to ignore the console warriors and pick at least one source, then pop into the thread to see how many people corroborate it. So far I'm sticking with DF

Yeah, Digital Foundry is great. My favorite video of theirs is the retrospective they did on the PS3 reveal; so much interesting insight. NX Gamer seems good too, but I've seen a fair bit of skepticism towards some of the things he's said, so I don't know what to think. I want to get caught up in the hype for new tech, and instead I'm getting caught up in the console war, lol.

Ignore it all until we get actual console vs. console (on an individual game basis) comparisons from Digital Foundry and others later in the year.

That's a long time to not read and discuss these new consoles. Hopefully I won't have to.
 

vivftp

Member
Oct 29, 2017
19,764
Does anyone think Lockhart will use binned XSX APU parts with deactivated CUs?

I believe Albert Penello addressed that once, saying it would not be a sound model to just rely on binned chips for the cheaper SKU. Plus it's likely the cheaper SKU would outsell the higher one, so that strategy doesn't work. If there is a Lockhart, it will be purpose-built to suit its lower price.
 

Md Ray

Member
Oct 29, 2017
750
Chennai, India
the PS5 will not be at 2080 Level.

Why not? PS4 now outperforms GTX 760, which has quite a considerable mem b/w and computational power advantage over PS4 on paper.

GTX 760: 2.38 TF, 192 GB/s (fully dedicated to the GPU)
PS4 GPU: 1.84 TF, 176 GB/s (split between CPU/GPU)

RTX 2080: 10.6 TF, 448 GB/s (fully dedicated to the GPU)
PS5 GPU: 10.3 TF, 448 GB/s (split between CPU/GPU)

The gap between the RTX 2080 (non-Super) and the PS5 GPU is even smaller than between the GTX 760 and the PS4 GPU. So why wouldn't PS5 be on the same level?

Dynamic RS: Off
[image: 1080p.png]

Dynamic RS: On
[image: DRS.png]


You have the PS4 utilizing a mix of the PC version's Low, Medium, and High settings with DRS enabled at 1080p, sticking closely to 60fps. And then you have this card, which can't even manage 30fps at Lowest settings, 1080p, with DRS set to On, using the Vulkan API. I wouldn't be surprised if PS5 actually outperformed the RTX 2080 down the line, like PS4 is beating its Nvidia equivalent from the same era.
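For what it's worth, the on-paper gaps quoted above work out like this (a trivial sketch using the same figures):

```python
# On-paper compute gap in each pairing, using the figures quoted above.
pairs = {
    "GTX 760 vs PS4 GPU": (2.38, 1.84),
    "RTX 2080 vs PS5 GPU": (10.6, 10.3),
}
for name, (pc_tf, console_tf) in pairs.items():
    print(f"{name}: PC card {(pc_tf / console_tf - 1) * 100:.0f}% ahead on TF")
# GTX 760 vs PS4 GPU:  ~29% ahead
# RTX 2080 vs PS5 GPU: ~3% ahead
```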
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
Why not? PS4 now outperforms GTX 760, which has quite a considerable mem b/w and computational power advantage over PS4 on paper. […] I wouldn't be surprised if PS5 actually outperformed the RTX 2080 down the line, like PS4 is beating its Nvidia equivalent from the same era.
Dictator already left.
 

Deleted member 61469

Attempted to circumvent ban with alt account
Banned
Nov 17, 2019
1,587
Why not? PS4 now outperforms GTX 760, which has quite a considerable mem b/w and computational power advantage over PS4 on paper. […] I wouldn't be surprised if PS5 actually outperformed the RTX 2080 down the line, like PS4 is beating its Nvidia equivalent from the same era.

Do you have more examples or did you just cherrypick one game?
 

Straffaren666

Member
Mar 13, 2018
84
Why not? PS4 now outperforms GTX 760, which has quite a considerable mem b/w and computational power advantage over PS4 on paper. […] I wouldn't be surprised if PS5 actually outperformed the RTX 2080 down the line, like PS4 is beating its Nvidia equivalent from the same era.

I believe he's basing it on the performance of Gears 5 on the XSX, which in DF's testing ran at a level comparable to a 2080, while he expects the PS5 to come in below the 2080. However, given that RDNA (1) and Turing have similar performance per TF (comparing at actual game clocks) and that the PS5 and the 2080 are both ~10.3 TF, it's reasonable to believe the PS5 will be at a similar performance level to a 2080. The bandwidth available to the PS5's GPU will be lower than a 2080's, since it's shared with the CPU/SSD/audio, but there will probably be a 5-10% IPC improvement in RDNA 2, which should balance out the bandwidth deficit.
 
Last edited:
Oct 25, 2017
4,427
Silicon Valley
Why didn't Sony go for 16Gbps GDDR6 chips instead of 14? Price? Availability?
Your username makes the potential answer to this pretty funny.

EDIT - I'll clarify, lol. I believe the choice is based on the speed requirements they were after for the system.

Of course, I can't break it down like others can, but a bunch of my friends at various studios were pretty hyped by the PS5's architectural speed, so it's probably important.
 
Last edited:

Md Ray

Member
Oct 29, 2017
750
Chennai, India
Do you have more examples or did you just cherrypick one game?
Resident Evil 2 Remake, COD: MW, and Gears 5 also run worse on a GTX 760 at Low settings, 1080p than they do on consoles, let alone at console-equivalent settings/resolution. Look it up. And I'm sure there are other recent games besides them that perform worse. DOOM Eternal is the best-case scenario, as it happens to be one of the best-optimized games of this generation.
 
Last edited:

Nightengale

Member
Oct 26, 2017
5,708
Malaysia
Personally, I've chosen to take what *everyone* is saying about their respective secret sauces with a grain of salt and wait till we see how all those different elements play out across an aggregate of products.

It's not a matter of believing or disbelieving. I trust developers when they say the PS5's SSD and I/O design is great and all that. And raw specs don't lie either.

But when it comes to things beyond the spec sheet, like bottlenecks, the potential utility of the SSD for LoD and so on, and how all of it comes together in a final product, it's all pretty speculative, since nobody is giving us the minute details of why it's so special and how it would translate to X/Y/Z in their games versus a different platform. Even the folks who go deep into the minutiae are working off spec sheets and educated guesses based on their knowledge and expertise in those topics, and it's also a fact that the PS5 isn't out, we've not seen comparable tech demos, and we've not seen any developer deep-dive on how the PS5's SSD allowed them to do X/Y/Z better than *insert platform*.
 
Nov 2, 2017
2,275
Why not? PS4 now outperforms GTX 760, which has quite a considerable mem b/w and computational power advantage over PS4 on paper. […] I wouldn't be surprised if PS5 actually outperformed the RTX 2080 down the line, like PS4 is beating its Nvidia equivalent from the same era.
Kepler aged terribly, though, as it wasn't similar to GCN at all, so when engines started taking advantage of GCN, Kepler fell behind. Lack of async compute is just one example. Each generation since Kepler, Nvidia has improved its architecture to bridge the gap in current engines. Your own graphs show this, as the 950 (1.8 TFLOPS) matches a 780 Ti (5 TFLOPS) in Doom. Also, Doom is truly the most extreme case you can make, as it's probably the engine Kepler does worst in.

Kepler was just very ill-suited to how engines evolved as a result of consoles using GCN. Turing does not have this weakness, as it has all the forward-looking features the next-gen consoles will have, so it won't age like Kepler at all. As a result, your 760 vs PS4 GPU example isn't really indicative of how the 2080 will match up with the PS5's GPU in the future.
 

cooldawn

Member
Oct 28, 2017
2,450
I'm salivating at the prospect of an RT GT game. I'd be happy with GT Sport plus a massive visual-improvements patch; there are plenty of cars and tracks, so give me the visual flair. Racing games already push visuals at high frame rates, and RT lighting and reflections will absolutely push them into true photorealism territory, GT and Forza alike. Going to be insanely cool.

And for GT, at least, in VR too! Given that the original PSVR headset is getting supported and only has a 1080p screen, I would expect the full visual suite and RT in PSVR in all game modes on PS5. Ooft.
RT is cool and all, but it's not the technology I'm most looking forward to next generation; or rather, I'm not convinced by what I've seen so far. I guess that's why I'm most excited about Polyphony Digital's implementation, to really showcase the technology. A game like Gran Turismo would benefit from global illumination and reflections.

Maybe Gran Turismo can return to the heady days of Gran Turismo 5, when the user could determine all the dynamics (time of day and weather), and re-introduce a more resilient version of procedural deformation (licence agreements notwithstanding). It would be nice, when the weather turns, to get a sense of atmosphere where the sun burns through the early-morning mist while reflections glint on wet objects around the course. I'm sure whatever system Polyphony Digital end up using will be amazing to see.

I hope their RT doesn't turn out to be too clinical, though. I can't stand completely clean, highly reflective surfaces; they always seem a little too fake. Most of the time nature is rough, not mirror-smooth. For this reason, overall, RT reflections have yet to impress me.

And racing games are typically not very demanding to render, so there's more possibility to spend budget on RT without impacting the rest of the presentation.
Hmmm... I always consider Polyphony Digital to be a benchmark studio, more so than Naughty Dog or Guerrilla Games. I like their perceived precision, materials, lighting and HDR. I can't think of a developer with so much attention to detail relative to system capabilities.

Saying that, The Last Of Us Part II is still incoming.

Neither Dictator, nor anyone else, is solely explaining the data for what it is, or has a totally detached perspective. Everyone applies interpretation to the information they process and present. Mr. Battaglia has long experience and a breadth of background knowledge that make his views deserve careful consideration, and also make them more plausible. But that doesn't mean his takes are flat descriptions of the truth. For example, he believed just a few months ago that raytracing would necessarily be limited on the next-gen consoles, and full path tracing out of reach. He was proved wrong (at least for the XSX).

This isn't a criticism of him! Synthesizing all the data in a complex field is a hard task. Mr. Battaglia readily admitted his erroneous prediction, which is actually the best recommendation for listening to what he says in the future. The willingness to update your own thinking in the face of facts is what typifies intelligent discussion. That's why there are people on the forum who are trustworthy, even if they preferentially support one platform or the other.
Right, people need to read this.

Considering I watch all of Digital Foundry's work (and NX Gamer's), I do think Dictator doesn't believe consoles can really push beyond the PC space.

On a different note...

I'm a tad concerned about the way things are here at the moment. One would have thought the release of specifications for both PlayStation 5 and XBOX Series X would have answered questions and calmed things down. Evidently it's done neither. What it has done is make this place even more like sandpaper.

XBOX Series X has more TFs. PlayStation has a more exotic design. It's not actually surprising (no matter how people wanted things to turn out): Microsoft from the outset wanted more power, and historically Sony have skewed towards innovative design. The result was right under our noses all this time.

Things aren't helped by the musings of developers scraping the walls and being at odds with commentators. It's a clusterfuck.

Why is 3rd party a barometer when it'll be first party that'll pump these things to the max? When Microsoft and Sony release their first full next-gen exclusive titles, that's when we'll know what each can do to start, not when some third party decides to just get things working.

I'm looking forward to a generation where I can see improvements year-on-year. PlayStation 3 had an amazing ability to retain a momentum of improvement with each game that passed; it was amazing to see just how far it came from year one right up to The Last Of Us. It feels like PlayStation 5 will offer the same sense of wonderment year-on-year.

With Sony's ability to curate amazing IPs, that's exciting to me. And with the wind in their sails, Microsoft have an unbelievable opportunity to make a massive impact next generation.

No matter which way we turn...things are looking amazing right now.
 

GhostTrick

Member
Oct 25, 2017
11,316
Why not? PS4 now outperforms GTX 760, which has quite a considerable mem b/w and computational power advantage over PS4 on paper. […] I wouldn't be surprised if PS5 actually outperformed the RTX 2080 down the line, like PS4 is beating its Nvidia equivalent from the same era.


Because you're comparing apples and oranges here.

First: the GTX 760's Kepler has nothing to do with the GCN architecture. The GTX 760's Kepler architecture has nothing to do with the RTX 2080's Turing architecture, which has nothing to do with the RDNA 2 architecture. You can't tell which architecture will age better. If anything, according to the bench you posted, the R7 260X, which is slower than the 780 Ti, is getting higher framerates.

Second: the RTX 2080 runs higher than 10.2 TFLOPS. In practice it's over 11.2 TFLOPS, because the boost clocks Nvidia markets are more conservative than the ones you get in real-world use.
 

Straffaren666

Member
Mar 13, 2018
84
Because you're comparing apples and oranges here. […] Second: the RTX 2080 runs higher than 10.2 TFLOPS. In practice it's over 11.2 TFLOPS, because the boost clocks Nvidia markets are more conservative than the ones you get in real-world use.

I don't own a 2080, but I seriously doubt it's running over the advertised boost clock unless it's being overclocked. gpu.userbenchmark.com reports an 18% performance lead for the 2080 over the 5700 XT. The GPU of the PS5 will be running at a ~23% higher clock frequency than the game clock of a 5700 XT. I find it unlikely that the clock frequency increase and the architectural improvements of RDNA 2 wouldn't, at the least, make the PS5 perform at a level comparable to a 2080. An overclocked 2080 isn't really relevant to this discussion.
 

GhostTrick

Member
Oct 25, 2017
11,316
I don't own a 2080 but I seriously doubt it's running over the advertised boost clock, unless it's being overclocked. gpu.userbenchmark.com reports a 18% performance lead for the 2080 compared to the 5700 XT. The GPU of the PS5 will be running at ~23% higher clock frequency than the game clock of a 5700 XT. I find it unlikely that the clock frequency increase and architectural improvements of RDNA 2 wouldn't, at least, make the PS5 perform on a comparable level to a 2080. An overclocked 2080 isn't really relevant in this discussion.


That's how Nvidia boost clocks work: they go as high as they can.
1800MHz for the RTX 2080 FE:
www.techpowerup.com: NVIDIA GeForce RTX 2080 Founders Edition 8 GB Review
And yet during gameplay it goes as high as 2000MHz and averages around 1900MHz.
And it can go higher for models with a decent fan.
 

Straffaren666

Member
Mar 13, 2018
84
That's how Nvidia boost clocks work: they go as high as they can. 1800MHz for the RTX 2080 FE (www.techpowerup.com: NVIDIA GeForce RTX 2080 Founders Edition 8 GB Review). And yet during gameplay it goes as high as 2000MHz and averages around 1900MHz. […]

The link you provided is for the Founders Edition. I don't believe it's representative of the 2080. Do you have any links that confirm the 2080 averages above the boost clock for the stock 2080?

Nevertheless, the benchmark indicates an 18% performance difference between the 2080 and the 5700 XT, regardless of the actual clock frequency of the 2080. The GPU of the PS5 should be more than 18% faster than a 5700 XT.
 

GhostTrick

Member
Oct 25, 2017
11,316
The link you provided is for the Founders Edition. I don't believe it's representative for the 2080. Do you have any links that confirms the 2080 averages above the boost clock for the stock 2080?

Never the less, the benchmark indicates there is a 18% performance difference between the 2080 and 5700 XT, regardless of the actual clock frequency of the 2080. The GPU of the PS5 should be more than 18% faster than a 5700 XT.


Why does it matter?
The FE is rated at 1800MHz compared to the 1710MHz of the normal one. It's the same for most newer (at least Pascal onwards) Nvidia GPUs: they run over their advertised boost clocks.
It's definitely representative of the 2080 or any other Nvidia GPU.
Another example:
www.techpowerup.com: Gigabyte GeForce RTX 2080 Gaming OC 8 GB Review
Rated at 1815MHz, it actually runs up to 2000MHz and averages around 1900MHz.
www.techpowerup.com: MSI GeForce RTX 2080 Gaming X Trio 8 GB Review
Rated at 1860MHz, it actually runs up to 2025MHz and averages around 1950MHz.
 

Straffaren666

Member
Mar 13, 2018
84
Why does it matter? The FE is rated at 1800MHz compared to the 1710MHz of the normal one. […] It's definitely representative of the 2080 or any other Nvidia GPU. […]

No, the FE is not representative of the 2080, just as the 50th anniversary edition isn't representative of the 5700 XT. Besides, we're talking about the real-world average clock frequency, and none of the links you provided tell us that. You can spin it any way you want, but the benchmark indicates an 18% performance lead for the 2080 over the 5700 XT, which is what matters.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
Considering I watch all of Digital Foundry's work (and NX Gamer's), I do think Dictator doesn't believe consoles can really push beyond the PC space.
I have never quite understood that sentiment, or the basis behind it... I think it's generally short-sighted of anyone to hold it.

Like PCs, consoles are also just hardware. And there is a totally quantifiable performance characteristic or level that can be measured between the two.

E.g. the upcoming next-gen consoles may very well end up falling in line with what would be a mid-range PC, or somewhere in between a mid-range and a high-end PC because the software can be better optimized for consoles. In 2 years' time, these upcoming next-gen consoles become equivalent to a low-end PC. But at all times, the advantages consoles offer are their closed, standardized hardware, which allows for better optimization, and OEM-driven pricing, which means that for an equivalent amount of performance it will always cost less to get a console than to build a PC.
 

GhostTrick

Member
Oct 25, 2017
11,316
No, the FE is not representative of the 2080, just as the 50th anniversary edition isn't representative of the 5700 XT. Besides, we're talking about the real-world average clock frequency, and none of the links you provided tell us that. […]


Of course it does. Those are the clocks you get in game.
Heck, let's make it more simple:
[image: unknown.png]

As for the PS5 being "more than 18% faster than the 5700 XT": how do you know that?
 

Straffaren666

Member
Mar 13, 2018
84
Of course it does. Those are the clocks you get in game. […] As for the PS5 being "more than 18% faster than the 5700 XT": how do you know that?

Because we're talking about performance based on the benchmark. The performance of the 2080 is what it is, from more than 100 thousand samples, regardless of the clock frequency of the 2080 in the DB. If the 2080 actually is running above the advertised boost clock (which it seems to do), it only means the efficiency of the Turing architecture is less than what I assumed.

I don't "know" that the PS5 will be 18% faster than a 5700 XT, but it's reasonable to believe it will be at least that, based on the clock frequency of the PS5 being about 23% higher than the game clock of the 5700 XT and on the IPC improvements of the RDNA 2 architecture. It's easier to extrapolate the performance of the PS5 from the 5700 XT than from the XSX, since the PS5's GPU resources will be similar to those of the 5700 XT, and in that case performance will scale linearly with clock frequency.
 

GhostTrick

Member
Oct 25, 2017
11,316
Because we're talking about performance based on the benchmark. […] the PS5's GPU resources will be similar to those of the 5700 XT, and in that case performance will scale linearly with clock frequency.


And the 5700 XT has 40 CUs against 36 for the PS5.
So in the end, you end up with similar performance.
 

Straffaren666

Member
Mar 13, 2018
84
And the 5700 XT has 40 CUs against 36 for the PS5.
So in the end, you end up with similar performance.

Whether the performance of the PS5 and a 5700 XT will be similar is of course subjective. Based on the game clocks, the 5700 XT is an 8.8TF GPU and the PS5 is a 10.3TF GPU. There will be architectural improvements in RDNA 2 as well. If you think a 5700 XT and a PS5 have similar performance, then the PS5 definitely will have performance similar to a 2080.

I must have mixed up the spec charts of the 5700 and 5700 XT when I claimed the PS5 has a 23% higher clock frequency. The performance delta based on TF between the 5700 XT and the PS5 is ~17%. Then there will be architectural improvements from RDNA 2. On the downside, there will be a bandwidth loss due to memory shared with the CPU/SSD/audio. All in all, there's good reason to believe the performance level will be comparable to a 2080 and ~15% above the 5700 XT.
 

GhostTrick

Member
Oct 25, 2017
11,316
Whether the performance of the PS5 and a 5700 XT will be similar is of course subjective. Based on the game clocks, the 5700 XT is an 8.8TF GPU and the PS5 is a 10.3TF GPU. […]


In what world is the RX 5700 XT an 8.8 TFLOPS GPU?
At regular game clocks it's at 9.2 TFLOPS, and 9.75 TFLOPS at boost clocks. That's for the regular model; most non-reference models go up to 2GHz, putting it at 10.2.

The 5700 XT has 40 CUs, not 36.
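The disagreement over the 5700 XT's TF number comes down to which clock you plug into the standard formula (TFLOPS = CUs x 64 lanes x 2 ops per clock x frequency). A small sketch, using the reference 1755MHz game clock and 1905MHz boost clock plus the PS5's announced specs:

```python
# TFLOPS = CUs x 64 shader lanes x 2 ops per clock (FMA) x frequency.
def tflops(cus: int, mhz: float) -> float:
    return cus * 64 * 2 * mhz * 1e6 / 1e12

print(f"5700 XT @ game clock:  {tflops(40, 1755):.2f} TF")  # ~8.99
print(f"5700 XT @ boost clock: {tflops(40, 1905):.2f} TF")  # ~9.75
print(f"PS5 @ 2230 MHz:        {tflops(36, 2230):.2f} TF")  # ~10.28
```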

 

darthkarki

Banned
Feb 28, 2019
129
I have never quite understood that sentiment. […] The upcoming next-gen consoles may very well end up falling in line with what would be a mid-range PC. […] In 2 years' time, these upcoming next-gen consoles become equivalent to a low-end PC. […]

This was an interesting experiment for me. Now that we have the specs, the suggestion that these consoles could be around a "mid-range" PC stands out as such an absurd idea that I had to take a look, so I reviewed the Steam hardware survey.

GPU:
At the moment the best guess/evidence is that the new consoles' GPUs will be around RTX 2080 level. I personally have no doubt they'll soon be performing better than that, just like every console in the past has, over time, outperformed the PC GPUs it initially seemed most similar to. But anyway:

Percent of users with a 2080 or better: 2.04%

That means these consoles are in the top 2 percent. Better than 98% of PCs out there. That's not mid-range. That's not even high-end. That's top-end.

But perhaps you meant the positioning of the GPU in the range of currently available parts, not actual usage. Well, there is also exactly one graphics card above the 2080: the 2080 Ti. Also, there's the little detail that the 2080 is $600+, which is most likely more than either of these consoles will cost in their entirety. Again, that's not high-end, that's extremely high-end.

As for what to consider "mid-range", the most popular card is the GTX 1060 at 12.68%. Needless to say this is not even close to the power bracket the 2080 is in.

CPU:
This was pretty interesting to me. 72.51% are 4 cores or fewer. 93.68% are 6 or fewer. Only 6.32% are 8 cores or more, and not all of those have hyperthreading. There isn't as detailed a breakdown of CPUs as there is for GPUs, meaning even this 8-core percentage includes things like the AMD FX-8350, which, well... lol.

These consoles are in the top 6 percent, better than 94% of PCs (just in core count, much higher when you consider architecture).

For clock speeds again it's tough to get detailed info, but 73.33% are 3.29 GHz or lower, 26.67% are 3.3 or higher.

The consoles are somewhere in the middle of the top 26% for clock speed, again with the latest, most efficient architecture.

They are similar to the 3700x (probably a little slower). That's a $300 CPU, with the only higher-end parts being the 3800x (which is similar, just clocked a little higher), the 3900x, and the 3950x.

Those Ryzen 9s are $500+. That's not high-end, that's EXTREMELY high end, enthusiast/professional level. So again, the consoles are basically top-end for a gaming PC.

=============

Now as silly as it seemed to claim next-gen consoles would be "mid-range", this was even better:

In 2 years' time, these upcoming next-gen consoles become equivalent to a low-end PC.

A low-end PC is something like the Intel Pentium G5400 or the AMD Ryzen 3 2200G, and... well:

[image: iGPU.png]


These aren't even close to last-gen consoles, not by a long shot. You seriously think that in a couple of years, low-end integrated graphics will be at 2080 level? No integrated graphics will be even close to next-gen console level at the end of the coming generation, let alone a couple of years in.

Anyway, this was fun, thanks for giving me the idea. I knew these consoles were powerful, but even I didn't realize how extraordinarily high-end they really were. This isn't even counting the bespoke customizations for I/O where especially the PS5 will be far beyond anything possible on the PC, and likely for a good long while. Even when PC SSDs get to the base speed of the PS5, that doesn't include all the customizations that remove the bottlenecks and slowdowns PCs will still have. Remember Cerny said the decompressor was worth what, 9 Zen 2 cores? The very highest-end PC CPUs and GPUs will surely be faster, but they will also have to be used for these other functions that the consoles have dedicated hardware for.
 

Fafalada

Member
Oct 27, 2017
3,067
Am I out of line to think that as the next gen goes along, compression techniques and methods might improve so they can achieve, or come close to achieving, those high peak figures more often?
Not likely. Lossless compression is governed more by how compressible the source data is than by much else. There are things you can do with data layout to help facilitate some efficiency gains (which is what BCPack does, afaik), but we're still talking small differences here.
Also note that while the ballpark estimate of a 50% average for game data run through an LZ-derivative compressor rings true IME, that won't necessarily be representative of next-gen datasets.
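A toy illustration of the layout point (not BCPack itself, which is far more sophisticated and specific to block-compressed textures): the same bytes, arranged so the predictable channel sits contiguously, compress measurably better. Python 3.9+ (for randbytes), with zlib standing in for the real codecs:

```python
import random
import zlib

# Same pixel data, two layouts: noise RGB plus a constant alpha channel.
# Grouping the predictable channel makes the lossless codec's job easier.
rng = random.Random(0)                 # fixed seed for a repeatable demo
n = 256 * 256
rgb = [rng.randbytes(3) for _ in range(n)]

interleaved = b"".join(px + b"\xff" for px in rgb)   # RGBA RGBA RGBA ...
separated = b"".join(rgb) + b"\xff" * n              # RGB block, then alpha block

for name, blob in [("interleaved", interleaved), ("separated", separated)]:
    print(f"{name}: {len(blob):,} -> {len(zlib.compress(blob, 9)):,} bytes")
# the separated layout compresses noticeably smaller
```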
 

AegonSnake

Banned
Oct 25, 2017
9,566
I believe he's basing it on the performance of Gears 5 on the XSX, which in DF's testing ran at a level comparable to a 2080 […] it's reasonable to believe the PS5 will be at a similar performance level to a 2080. […]
but gears 5 is outperforming the 2080 ti on series x. its running at native 4k 60 fps locked on ultra settings with several additional graphics effects that are not even in the pc version.

[image: DA9ANPmJDKhSgzNxMAUj9W.png]

a 10.3 tflops ps5 gpu should be around 2080 level at the very least. maybe even 2080 super level if we go by gears 5, and tbh we should, because thats our only form of comparison so far, with the exception of the minecraft ray tracing demo, and we dont know what its exact framerate was.

its crazy to me that dictator continues to make these absolute statements, as if hes seen how rdna 2.0 cards perform, as if hes seen ps5 games in action. the only point of comparison we have shows an amazing result matching if not exceeding the 2080 ti, which is almost 35% more powerful than the rtx 2080. an 18% gap in tflops should not get us all the way down to 2070 levels.
 

Straffaren666

Member
Mar 13, 2018
84
In what world is the RX 5700 XT an 8.8 TFLOPS GPU? At regular game clocks it's at 9.2 TFLOPS, and 9.75 TFLOPS at boost clocks. […]


The game clock is 1755MHz, which results in ~9TF. My bad, I don't know how I got 8.8TF. There seem to be some OC cards out as well with a game clock of 1795MHz, which results in ~9.2TF. The distribution of reference cards vs OC cards in the gpu.userbenchmark.com DB is unknown, but let's assume it's 50/50. The boost clock is not really relevant, nor is the base clock. So based on an 18% performance lead for the 2080 over a 9.1TF 5700 XT in the gpu.userbenchmark.com DB, the 2080's lead over a corresponding 5700 XT linearly extrapolated to 10.3TF should be ~4% (1.18 * 9.1 / 10.3). That's before taking the RDNA 2 improvements or the bandwidth deficit into account. There is also a big uncertainty in what the actual average clock frequency of the 5700 XT is in the benchmark. I still think there's good reason to believe the performance level will be comparable to a 2080.
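Spelling that extrapolation out as a sketch, under the same assumptions the post states (roughly linear TF scaling at similar CU counts, and userbenchmark's measured 18%):

```python
# Assumptions from the post: userbenchmark's 18% lead, an assumed ~9.1 TF
# game-clock figure for the 5700 XT, and linear scaling with clock.
lead_2080_over_5700xt = 1.18
tf_5700xt, tf_ps5 = 9.1, 10.3

remaining_lead = lead_2080_over_5700xt * tf_5700xt / tf_ps5
print(f"2080 lead over a 10.3 TF RDNA card: {(remaining_lead - 1) * 100:.0f}%")
# ~4%, before RDNA 2 IPC gains or shared-bandwidth losses are considered
```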
 
Oct 27, 2017
7,695
Worth the cost of what? A marketing number? […] Further, the difference in theoretical performance is the smallest of any console generation in history. 17% is not going to amount to a discernible difference. A 120% SSD performance advantage will. […]
Don't forget that the die size may not differ so drastically. Sony has chosen to use some of that die area differently: custom hardware accelerators, largely having to do with accelerating I/O, as we've seen in the following slide time and time again:
[image: gsmarena_006.jpg]

Not many people here understand it or want to admit it, because it's not as sexy-sounding as a giant GPU, but this will materially make a more significant difference to the overall gaming experience this gen, in terms of game-design scope and interactivity.

To your last point: remember, Sony did this because it's exactly what console game devs wanted. They wanted it exactly because it will make the most significant difference.
 
Last edited:

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
This was an interesting experiment for me. Now that we have the specs, the suggestion that these consoles could be around a "mid-range" PC stands out as such an absurd idea that I had to take a look, so I reviewed the Steam hardware survey. […]
Errr...... great post, but I think you got what I was saying totally wrong.

I am not speaking for, but speaking against, those who say things like these consoles can't compare to PCs, or that they are low-range, mid-range... etc.

The example I gave was based on using the raw TF number of the consoles the way the PC audience interprets it. By the time these consoles are released, there will be new GPUs from AMD and Nvidia, and based on their product stacks I expect these consoles will fall into the performance bracket the PC guys would call mid-range. I am saying that even if they do, it doesn't matter.

Trust me, you don't want to get me started on how hypocritical I think most of the PC gamers are, because if you were to just read what they say, you would think every single PC is running a 2080 Ti with a 16-core CPU at 4K 144fps or some shit like that.
 

GhostTrick

Member
Oct 25, 2017
11,316
The game clock is 1755MHz, which results in ~9TF. […] I still think there's good reason to believe the performance level will be comparable to a 2080.




Not really.
www.techpowerup.com: XFX Radeon RX 5700 XT THICC III Ultra Review

15% difference at 4K, and that's with a model that averages around 2GHz.

And stop saying clockspeeds are uncertain. I keep providing you links with in-game clockspeed tests showing averages, peaks and such.
 

Dictator

Digital Foundry
Verified
Oct 26, 2017
4,931
Berlin, 'SCHLAND
but gears 5 is outperforming the 2080 ti on series x. its running at native 4k 60 fps locked on ultra settings with several additional graphics effects that are not even in the pc version. […] its crazy to me that dictator continues to make these absolute statements […]
I cannot believe I have to come out of hiding to expressly yell about how damn wrong you are. The Gears 5 bench ran at Ultra settings, not at higher-than-Ultra settings. Watch the damn video or read the damn article, where we say that. There, at exactly the same settings as Ultra, it ran like a 2080. The exact same settings as Ultra. How often do I have to correct purposefully misconstrued information?

Read
The
Damn
Articles
Watch
The
Damn
Videos