
Chettlar

Member
Oct 25, 2017
13,604
Also, who is to say how big or small the dedicated RTX hardware will be? What if the hardware dedicated to next-gen console raytracing is more intensive than RTX? Like, we literally don't know how they will be doing it, right? This is AMD, not Nvidia, so they may be tackling it in a new way. I mean, Navi doesn't even have its own version of RTX yet. For consoles, we know they will, but all they have said is that there will be dedicated raytracing hardware. That's it.

For all we know there could be a separate chip on the motherboard dedicated solely to handling raytraced lighting all on its own. Likely? Uh, well, I'm not an expert, but I'm gonna say no. That seems like it'd be expensive.

But the point is, we seem to have no idea how exactly they are going to handle raytracing, just that they will at the hardware level.

Unless we do know more and I've missed it? Hasn't AMD been rather hush on the topic? Plus, what if Microsoft/Sony are putting serious R&D of their own into that hardware? We know Microsoft is already putting solid work into making RTX a thing alongside Nvidia through their partnership. I would assume they'd have the engineers to be able to work with AMD on a solution.

This is an inquisitive post. I'm curious whether what I'm saying is sensible or ignorant rubbish.
 

melodiousmowl

Member
Jan 14, 2018
3,774
CT
Or the thing in the Sony patent, but yeah. We're definitely not getting something bog-standard.

The Sony patent was about optimizations to certain kinds of reads where a standard solution can't saturate the bus. A PCIe 3.0 SSD is what, max ~3.5GB/s on the high end? (bus limited)

No clue whether PCIe 4.0 gets you the needed read speeds with standard equipment, or whether optimizations can be done somewhere else, but my bet is the SSD will be partly custom, and the cost of that will drive it to be soldered.

They could have an NVMe slot for consumer upgrades to overall storage, but unless the internal SSD is bog-standard, that will still mean shuffling files over when a game needs to run.
 

Guerrilla

Member
Oct 28, 2017
2,234
A PCIe 4.0 SSD should be able to fill the entirety of 16GB of RAM within 4-5 seconds.
I'm still really skeptical of these claims.
So I have a 970 Evo NVMe which gets 3.5GB/s when I run a speed test. However, big games do not load in 4-5 seconds. The actual loading time difference compared to a SATA3 SSD is marginal in most cases. Can someone explain to me why this is/would be different for PS5 games?
On top of that, a 500GB drive at that speed would still be close to a hundred bucks, no? So I think 1TB would probably be too expensive to put in a console, which means we would still need larger, slower storage and at least have longer initial loading times when we start games, since data has to be pushed from the slower drive to the faster one, no?
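For what it's worth, the raw arithmetic behind the quoted claim is simple (a minimal sketch; the 5GB/s figure is an assumed high-end PCIe 4.0 drive speed, not a confirmed PS5 spec, and real loads come in slower because of the CPU-side work discussed later in the thread):

# Back-of-envelope RAM fill times from different drives (assumed speeds, not specs).
ram_gb = 16.0
drives_gbps = {"PCIe 4.0 NVMe (assumed)": 5.0,
               "PCIe 3.0 NVMe (e.g. 970 Evo)": 3.5,
               "SATA3 SSD": 0.55}
for name, speed in drives_gbps.items():
    print(f"{name}: {ram_gb / speed:.1f} s to fill {ram_gb:.0f} GB")

Under these assumptions the 3.5GB/s drive could, in theory, fill 16GB in about 4.6 seconds, which is where the "4-5 seconds" claim comes from; the gap to real load times is what the rest of the thread is arguing about.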
 

BreakAtmo

Member
Nov 12, 2017
12,805
Australia
I'm still really skeptical of these claims.
So I have a 970 Evo NVMe which gets 3.5GB/s when I run a speed test. However, big games do not load in 4-5 seconds. The actual loading time difference compared to a SATA3 SSD is marginal in most cases. Can someone explain to me why this is/would be different for PS5 games?
On top of that, a 500GB drive at that speed would still be close to a hundred bucks, no? So I think 1TB would probably be too expensive to put in a console, which means we would still need larger, slower storage and at least have longer initial loading times when we start games, since data has to be pushed from the slower drive to the faster one, no?

It's different because the games will actually be designed around the SSDs rather than HDDs. And PC SSD prices have virtually nothing to do with what Sony will pay for flash chips bought in bulk and soldered to the board.
 

JahIthBer

Member
Jan 27, 2018
10,371
Also, who is to say how big or small the dedicated RTX hardware will be? What if the hardware dedicated to next-gen console raytracing is more intensive than RTX? Like, we literally don't know how they will be doing it, right? This is AMD, not Nvidia, so they may be tackling it in a new way. I mean, Navi doesn't even have its own version of RTX yet. For consoles, we know they will, but all they have said is that there will be dedicated raytracing hardware. That's it.

For all we know there could be a separate chip on the motherboard dedicated solely to handling raytraced lighting all on its own. Likely? Uh, well, I'm not an expert, but I'm gonna say no. That seems like it'd be expensive.

But the point is, we seem to have no idea how exactly they are going to handle raytracing, just that they will at the hardware level.

Unless we do know more and I've missed it? Hasn't AMD been rather hush on the topic? Plus, what if Microsoft/Sony are putting serious R&D of their own into that hardware? We know Microsoft is already putting solid work into making RTX a thing alongside Nvidia through their partnership. I would assume they'd have the engineers to be able to work with AMD on a solution.

This is an inquisitive post. I'm curious whether what I'm saying is sensible or ignorant rubbish.
It's either going to be AMD's so-called hybrid solution or their own; there is no way to know at the moment.
I think Nvidia's ray tracing tech was generally a surprise, and the others have rushed to get something out as well.
 

zombiejames

Member
Oct 25, 2017
11,912
I'm still really skeptical of these claims.
So I have a 970 Evo NVMe which gets 3.5GB/s when I run a speed test. However, big games do not load in 4-5 seconds. The actual loading time difference compared to a SATA3 SSD is marginal in most cases. Can someone explain to me why this is/would be different for PS5 games?

Because it's not about load times. It's about developers building games from the ground up without having to design for the limitations of a mechanical hard drive. Look at this for a great example (at the 3m48s mark if it doesn't load there):

 
Mecha Meister (OP)

Next-Gen Guru
Member
Oct 25, 2017
2,800
United Kingdom
I'm still really skeptical of these claims.
So I have a 970 Evo NVMe which gets 3.5GB/s when I run a speed test. However, big games do not load in 4-5 seconds. The actual loading time difference compared to a SATA3 SSD is marginal in most cases. Can someone explain to me why this is/would be different for PS5 games?
On top of that, a 500GB drive at that speed would still be close to a hundred bucks, no? So I think 1TB would probably be too expensive to put in a console, which means we would still need larger, slower storage and at least have longer initial loading times when we start games, since data has to be pushed from the slower drive to the faster one, no?

From my research, with the advances in storage technology it appears that the loading speed of games has become predominantly CPU bound, and some games rely on single-threaded performance for loading, meaning they're not taking advantage of the multiple cores available.

I can foresee developments in this area, where games will be designed to take further advantage of the available cores during loading. File systems may also play a role in this.

I could be mistaken, but my understanding of the patent is that Sony have been working on technology that works with the SSD and features some kind of decompression cores to handle the decompression of data, which is usually done by the CPU. If this is correct, comes to fruition, and integrates well with the system, the results will be interesting to see.

Edit - Expanded post.
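To illustrate the single-threaded point, here is a minimal, hypothetical sketch in Python (not how any console actually loads assets): spreading decompression of asset chunks across cores is the kind of win that dedicated decompression hardware would give you for free.

# Hypothetical illustration: decompressing asset chunks on one core vs. all cores.
import multiprocessing as mp
import os
import time
import zlib

def make_chunks(n=32, size=4 * 1024 * 1024):
    # Build n compressed pseudo-asset chunks of roughly `size` bytes each.
    raw = os.urandom(1024) * (size // 1024)
    return [zlib.compress(raw) for _ in range(n)]

def inflate(chunk):
    # Stand-in for the per-chunk decompression work a loader has to do.
    return len(zlib.decompress(chunk))

if __name__ == "__main__":
    chunks = make_chunks()

    t0 = time.perf_counter()
    total = sum(map(inflate, chunks))          # single-threaded "loading"
    t1 = time.perf_counter()

    with mp.Pool() as pool:                    # one worker per available core
        pool.map(inflate, chunks)
    t2 = time.perf_counter()

    print(f"single core: {t1 - t0:.2f} s, all cores: {t2 - t1:.2f} s for {total >> 20} MB")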
 

Deleted member 10747

User requested account closure
Banned
Oct 27, 2017
1,259
On the hardware side, my understanding from the patent is that Sony have been working on some technology that works with the SSD and features some kind of decompression cores to handle the decompression of data. This is usually handled by the CPU.
Didn't Cerny say that it was a two-sided solution? A software stack based on their PlayGo system, and the other part was hardware I/O optimization? Or am I just remembering wrong?
 

Pottuvoi

Member
Oct 28, 2017
3,062
Speaking of SSDs, we are all assuming it's a PCIe 4.0 SSD. We haven't heard any specs about it despite devkits being out in the wild, and everyone is just kind of assuming it's PCIe 4.0 at this point; it could just be 3.0.
If it's custom it could also be something silly like PCIe 3.0 x16. ;)
Didn't Cerny say that it was a two-sided solution? A software stack based on their PlayGo system, and the other part was hardware I/O optimization? Or am I just remembering wrong?
I'm quite sure the software stack Cerny talked about will play quite a big part in how fast it actually is.
The patent seems to suggest quite big changes compared to what we're normally used to on the PC side of SSDs. (Not confirmed that this is what we'll see in the PS5.)
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
From my research, with the advances in storage technology it appears that the loading speed of games has become predominantly CPU bound, and some games rely on single-threaded performance for loading, meaning they're not taking advantage of the multiple cores available.

I can foresee developments in this area, where games will be designed to take further advantage of the available cores during loading. File systems may also play a role in this.

I could be mistaken, but my understanding of the patent is that Sony have been working on technology that works with the SSD and features some kind of decompression cores to handle the decompression of data, which is usually done by the CPU. If this is correct, comes to fruition, and integrates well with the system, the results will be interesting to see.

Edit - Expanded post.
Both Microsoft and Sony had hardware decompression chips for several compression algorithms implemented this gen.
 

JahIthBer

Member
Jan 27, 2018
10,371
From my research, with the advances in storage technology it appears that the loading speed of games has become predominantly CPU bound, and some games rely on single-threaded performance for loading, meaning they're not taking advantage of the multiple cores available.

I can foresee developments in this area, where games will be designed to take further advantage of the available cores during loading. File systems may also play a role in this.

I could be mistaken, but my understanding of the patent is that Sony have been working on technology that works with the SSD and features some kind of decompression cores to handle the decompression of data, which is usually done by the CPU. If this is correct, comes to fruition, and integrates well with the system, the results will be interesting to see.

Edit - Expanded post.
The CPU-bound part is very true, and it's why Denuvo can increase loading times. I wouldn't be surprised if, say, the PS5 has better loading times than many PC versions with Denuvo, even if you have a super-expensive $500 PCIe 4.0 SSD.
Denuvo sucks, man, lol.
 

Chamon

Member
Feb 26, 2019
1,221
Soapbox: "someone who expresses their views with a degree of passion is said to be standing on their soapbox."

It's just an opinion post or a blog, whatever you want to call it. A forum thread with good grammar if you prefer. You people are absolutely ruthless sometimes. XD

Except for the clickbait name of the article 😉
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
A 40CU GPU at 2.0GHz likely beats out a 60CU GPU at 1.6GHz in game performance, since there are diminishing returns with teraflops. The 2.0GHz GPU has a 25% advantage in pixel engine and ROP performance.

Part of the reason why AMD lost out to Nvidia cards in the past despite having much higher flops.
I doubt a 60CU Navi would have the same number of TMUs and ROPs as a 40CU part.
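For context, here is the raw FP32 math behind the quoted comparison (a quick sketch using the standard 64 shader units per CU and 2 FLOPs per clock; both configurations are hypothetical). The higher-clocked part actually trails in teraflops, which is exactly why the argument leans on clock-sensitive parts of the pipeline (ROPs, front end) rather than shader throughput:

# Peak FP32 throughput in teraflops for a GCN/RDNA-style GPU.
def tflops(cus, clock_ghz, sp_per_cu=64, flops_per_clock=2):
    return cus * sp_per_cu * flops_per_clock * clock_ghz / 1000.0

print(f"40 CU @ 2.0 GHz: {tflops(40, 2.0):.2f} TF")  # 10.24 TF
print(f"60 CU @ 1.6 GHz: {tflops(60, 1.6):.2f} TF")  # 12.29 TF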
 

MrKlaw

Member
Oct 25, 2017
33,026
How much of loading being CPU bound is because they have to compress heavily to counter slow HDD access speeds? With an SSD you might not need to compress so much, meaning less heavy decompression.

Also, for loading during levels (a lift ride, slow walking with a chat), you can only give limited CPU time to the decompression because you're still having to draw your fake scenes to mask the loading.

Those could both change a lot with a dedicated SSD.
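A toy model of that trade-off (purely illustrative numbers I've assumed, not measurements): total load time is the time to read the file plus the CPU time to decompress it, and the balance flips once the drive stops being the bottleneck.

# Toy model: load time = read time (compressed file) + single-core decompress time.
def load_time_s(asset_gb, ratio, disk_gbps, decompress_gbps):
    read = (asset_gb / ratio) / disk_gbps       # compression shrinks what we read
    inflate = asset_gb / decompress_gbps        # but costs CPU time to undo
    return read + inflate

asset_gb = 8.0                                  # data a level needs in RAM
compressed = (2.0, 1.0)                         # assumed 2:1 ratio, 1 GB/s on one core
uncompressed = (1.0, float("inf"))              # nothing to decompress

for disk_name, disk_gbps in [("HDD 0.1 GB/s", 0.1), ("NVMe 5 GB/s", 5.0)]:
    for label, (ratio, speed) in [("compressed", compressed), ("raw", uncompressed)]:
        t = load_time_s(asset_gb, ratio, disk_gbps, speed)
        print(f"{disk_name:>12}, {label:>10}: {t:5.1f} s")

Under these assumed numbers, compression cuts the HDD load time but multiplies the NVMe load time several times over, which is the point about where the CPU time goes.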
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
From my research, with the advances in storage technology it appears that the loading speed of games has become predominantly CPU bound, and some games rely on single-threaded performance for loading, meaning they're not taking advantage of the multiple cores available.

I can foresee developments in this area, where games will be designed to take further advantage of the available cores during loading. File systems may also play a role in this.

I could be mistaken, but my understanding of the patent is that Sony have been working on technology that works with the SSD and features some kind of decompression cores to handle the decompression of data, which is usually done by the CPU. If this is correct, comes to fruition, and integrates well with the system, the results will be interesting to see.

Edit - Expanded post.
That's very true; the CPU is one of the main reasons for the PS4 and Xbox One's long loading times.
You will probably not find a single 5700 XT with clocks under 1800MHz on average ;)
That's a bold claim coming from two samples out of hundreds of thousands of cards. If you couldn't find a card under 1800MHz, AMD would have claimed an 1800MHz+ gaming clock :)
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
That's very true; the CPU is one of the main reasons for the PS4 and Xbox One's long loading times.

That's a bold claim coming from two samples out of hundreds of thousands of cards. If you couldn't find a card under 1800MHz, AMD would have claimed an 1800MHz+ gaming clock :)
Nah, the difference between AnandTech and TechPowerUp was probably the game set, not the silicon; differences between silicon show up when overclocking. Nvidia also claims lower clocks for the RTX series than you see in real scenarios.
 

chris 1515

Member
Oct 27, 2017
7,074
Barcelona Spain
How much of loading being CPU bound is because they have to compress heavily to counter slow HDD access speeds? With an SSD you might not need to compress so much, meaning less heavy decompression.

Also, for loading during levels (a lift ride, slow walking with a chat), you can only give limited CPU time to the decompression because you're still having to draw your fake scenes to mask the loading.

Those could both change a lot with a dedicated SSD.

In the Sony patent, they use a secondary CPU to offload the main CPU, plus enough hardware decompressors to keep up with the SSD speed during streaming; compression is still useful for reducing the size of games.

GitHub - IridiumIO/CompactGUI: Transparently compress active games and programs using Windows 10/11 APIs (github.com)

This comes from B3D; some people tried it, and it does reduce the size of games. For example, Fortnite goes from 18.8 GB to 10.2 GB.

Yep, see for example this tool: https://github.com/ImminentFate/CompactGUI
Previously they had listed more examples than Fortnite, but often with only around a 20% reduction in game size. Of course, a lot of that might come from duplicated files for optimized streaming.

Use this tool to:

  • Reduce the size of games (e.g. Fortnite: 18.8GB > 10.2GB)
  • Reduce the size of programs (e.g. Adobe Photoshop: 1.71GB > 886MB)
  • Compress any other folder on your computer

Yes, that's using the built-in compact.exe command and the newer NTFS "CompactOS" file compression that's been available since Windows 10. It uses a modified LZ77 algorithm with Huffman coding and a much larger dictionary. It's also available to applications through the Windows Compression API (which additionally includes ZIP file archives).

The command to compress all files in the current directory would be compact /c /EXE[:algorithm], where algorithm is one of XPRESS4K | XPRESS8K (default) | XPRESS16K | LZX

Compression only persists during read-only access - any write will automatically decompress the file.

This is very clever and would help with a 1TB SSD, and it's on top of not duplicating files.
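For anyone who wants to try the compact.exe route without the GUI, here is a minimal Windows-only sketch (the folder path is whatever you pass on the command line; error handling omitted):

# Windows-only sketch: compress a game folder with compact.exe (LZX) and
# report logical size vs. actual size on disk afterwards.
import ctypes
import os
import subprocess
import sys

kernel32 = ctypes.windll.kernel32
kernel32.GetCompressedFileSizeW.restype = ctypes.c_uint32

def size_on_disk(path):
    # GetCompressedFileSizeW returns the physical (compressed) size on NTFS.
    high = ctypes.c_uint32(0)
    low = kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    return (high.value << 32) + low

def folder_sizes(root):
    logical = physical = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            logical += os.path.getsize(full)
            physical += size_on_disk(full)
    return logical, physical

if __name__ == "__main__":
    target = sys.argv[1]
    subprocess.run(["compact", "/c", "/s:" + target, "/EXE:LZX"], check=True)
    logical, physical = folder_sizes(target)
    print(f"{logical / 2**30:.2f} GB logical -> {physical / 2**30:.2f} GB on disk")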
 

Carn

Member
Oct 27, 2017
11,904
The Netherlands
I'm still really skeptical of these claims.
So I have a 970 Evo NVMe which gets 3.5GB/s when I run a speed test. However, big games do not load in 4-5 seconds. The actual loading time difference compared to a SATA3 SSD is marginal in most cases. Can someone explain to me why this is/would be different for PS5 games?
On top of that, a 500GB drive at that speed would still be close to a hundred bucks, no? So I think 1TB would probably be too expensive to put in a console, which means we would still need larger, slower storage and at least have longer initial loading times when we start games, since data has to be pushed from the slower drive to the faster one, no?

In the end, it's about the bandwidth that is available. Games will always need some initial loading, but having a lot of bandwidth available also has a massive impact on what you can show on screen and when you can show it. Just a basic SSD increases that massively, so developers can start creating games with that baseline in mind. Next to that, Sony seems to be working on a custom solution to optimise things. That will probably mean specific work done on the I/O stack, but it could also mean that the SSD might actually be soldered onto the motherboard for all we know; or there might be specific optimisations in the filesystem (because for a console, most of the use cases are write-once / read-many).
 

BreakAtmo

Member
Nov 12, 2017
12,805
Australia
[OT8] Better call Soule

[OT8] Better Call Souls

That's very true; the CPU is one of the main reasons for the PS4 and Xbox One's long loading times.

Is there any particular reason why Sony and Microsoft couldn't employ the trick Nintendo recently added to the Switch, where the CPU overclocks while loading to speed up loading times? I was hoping we could see the Jaguars spiking to 3GHz for a few seconds.
 

Andromeda

Member
Oct 27, 2017
4,839
Both Microsoft and Sony had hardware decompression chips for several compression algorithms implemented this gen.
That was zlib decompression hardware used for slow streams of data stored on the HDD. We are talking about an entirely different level of processing power to decompress 5GB/s streams coming from NAND memory. About 50 times faster.
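The "about 50 times" figure follows directly from the stream rates being assumed here (neither number is an official spec):

# Assumed stream rates behind the "about 50 times faster" comparison.
hdd_stream_gbps = 0.1    # ~100 MB/s coming off a console HDD
nand_stream_gbps = 5.0   # hypothetical next-gen NVMe stream
print(f"{nand_stream_gbps / hdd_stream_gbps:.0f}x")  # -> 50x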
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Nah, the difference between AnandTech and TechPowerUp was probably the game set, not the silicon; differences between silicon show up when overclocking. Nvidia also claims lower clocks for the RTX series than you see in real scenarios.
A console GPU is always underclocked compared to its PC counterpart's official boost/turbo/gaming clock. Silicon lottery and different clock test methods are pretty much irrelevant, just like undervolting.

Whatever GPU the PS5 has, it will have disabled CUs and lower clocks than its PC counterpart.


Is there any particular reason why Sony and Microsoft couldn't employ the trick Nintendo recently added to the Switch, where the CPU overclocks while loading to speed up loading times? I was hoping we could see the Jaguars spiking to 3GHz for a few seconds.
Possible, but don't expect the huge bump that the Switch got, because the Switch CPU was extremely underclocked for its mobile mode. 3.2GHz isn't as aggressive an underclock from the 3700X's 4.4GHz boost clock as the Switch's was.
 

anexanhume

Member
Oct 25, 2017
12,912
Maryland
Thanks for posting this. It's a really fascinating video. I didn't understand half of what he said but to see just how complex building the hardware and software for one of these devices is, it's amazing.
There are two main takeaways to summarize it: game plaintext only ever appears on the internal SoC, and every peripheral has at least a key check with the SoC to verify its authenticity.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
A console GPU is always underclocked compared to its PC counterpart's official boost/turbo/gaming clock. Silicon lottery and different clock test methods are pretty much irrelevant, just like undervolting.

Whatever GPU the PS5 has, it will have disabled CUs and lower clocks than its PC counterpart.
I'm not saying you're wrong here, I'm just pointing out that the 5700 XT is not under 1800MHz ;)
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
I mean, I'd be happy with pretty much any improvement so long as it didn't hurt the chips.
The CPU upgrade that we are getting is so big that I wouldn't worry in that regard :)

It's been 10 years since CPUs have been exciting; as a PC gamer I'm totally psyched. First, Ryzen finally brought competition to the CPU market, and next year consoles will get into the ballpark of gaming CPU speeds, which will drive CPU requirements up. Exciting times, it's 2005 all over again.


I'm not saying you're wrong here, I'm just pointing out that the 5700 XT is not under 1800MHz ;)
Well, I'm talking about official numbers; I've never really bothered checking real-world clocks for all GPUs, and we won't be able to check the PS5 and Scarlett's real-world GPU speeds anyway :)
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
The CPU upgrade that we are getting is so big that I wouldn't worry in that regard :)

It's been 10 years since CPUs have been exciting; as a PC gamer I'm totally psyched. First, Ryzen finally brought competition to the CPU market, and next year consoles will get into the ballpark of gaming CPU speeds, which will drive CPU requirements up. Exciting times, it's 2005 all over again.



Well, I'm talking about official numbers; I've never really bothered checking real-world clocks for all GPUs, and we won't be able to check the PS5 and Scarlett's real-world GPU speeds anyway :)
Officially, AMD say peak is 9.75TF, so 1904MHz. I think console GPU clocks will be constant.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Officially, AMD say peak is 9.75TF, so 1904MHz. I think console GPU clocks will be constant.
AMD changed their PR regarding clock numbers. Gaming clock is what they used to call boost clock, and now boost clock is a new metric that means nothing, because it's a clock the card might hit here and there for a microsecond, depending on how well your specific card did in the silicon lottery.

Expecting a console to reach 2GHz, a speed even AMD didn't dare to talk about, is a bit of a pipe dream unless their 7nm silicon matures in an unbelievable way. The safe assumption is 36CU and lower clocks than the 5700 XT, considering the 5700 XT has a 225W TBP.

If you look at the RX 5700, AnandTech's highest results were 1700MHz, which makes much more sense for console binning levels. It actually represents a console pretty well: 40CU with 4 disabled, a gaming clock of 1625MHz, and a total of 185W TBP.
 

AegonSnake

Banned
Oct 25, 2017
9,566
The OG Xbox was sold at a $183.10 loss adjusted to 2019, and the 360 was sold at a $164.28 loss adjusted to 2019. The Xbox One was actually MS's first console sold for a profit (in theory, because BOM doesn't represent 100% of costs). After the trauma that was the One launch, I'm pretty doubtful that MS is going to go with that strategy again.

Selling to the core audience in the first year makes or breaks a console. MS can talk about services all day long, but if they are launching the same console at the same price point as Sony, they will get run over again. Maybe not as badly as last time, but still. Losing money on hardware at launch isn't something that will anger shareholders; losing money in the first year is a very standard thing in the industry. Gaining ground is what's important in the first year, not how much money you make. You want a lot of people buying your console fast, so that when the casuals come, all their expert friends already own your console; you want that landslide effect. The first two years are too important: this gen they killed the One beyond repair, last gen they built the 360.


People who would play in 1080p instead of 4K for $100 less can't be counted on one hand; it's actually the other way around. We talk resolutions and tech all day long in our Era echo chamber, but in real life the Wii outsold everything, the Switch outsold everything, and the PS4 outsells the Pro 3:1. If at launch there is a next-gen console that plays everything next-gen but in 1080p for $100 less, it will probably be a very popular SKU, just not amongst the Era crowd.




Regarding the power difference: if the CPU is the same, the memory is the same (with adjustments, because 1080p requires less volume and bandwidth) and the GPU instruction set is the same, you will get the same game, only at a lower resolution or with some lower graphical settings. PC gamers have had this for decades; there is a reason why NV sells both a 6.45TF RTX 2060 and a 16.3TF Titan RTX. The Nintendo Switch is another example: you pull it out of the dock and you lose 50% of its GPU power, so the console cuts resolution and you get the exact same game running on a handheld because the CPU and memory setup is the same.

Regarding lowering the BOM, if we are going for a 2.5x less powerful GPU in the 1080p Lockhart, you can (let's assume we have a 5700 XT in the Anaconda):
- Cut the CU count from 40 to 20.
- Lower the clock from 1755Mhz to 1404Mhz.
- Cut the 320-bit interface to a 192-bit memory interface.
- Move from 14Gbps GDDR6 to 13Gbps GDDR6 (which will result in 312GB/s which is a lot for a 1080p console).
- Cut memory from 18GB to 12GB (if we assume ~18GB from Scarlett's X10 mixed chips design).
- Cheaper cooling.
- Cheaper power supply.
- Less I/O connections.
- Smaller form factor.
- Cheaper case (materials and finish).
- Cut SSD from 1TB to 500GB (makes sense because game sizes will be smaller and the target audience is more casual).

We are reducing the BOM by way more than $100 here, I would even say ~$150, depending on how expensive Scarlett's SSD, memory and APU are.


There are three companies I can think of that can actually make a good streaming infrastructure out of the gate because they are knee-deep in the server world, and those are Amazon, Google and MS. There are three companies that can make a good gaming service out of the gate because they are knee-deep in gaming, and those are Sony, MS, and Nintendo. The only company that appears at the intersection of that Venn diagram is MS, so I wouldn't place xCloud in the same boat as PS Now or Stadia.


High clocks are awesome; they also draw a lot of power and generate a lot of heat. The 5700 XT is (if we ignore the Radeon VII rush job) AMD's first 7nm card, so I'm sure they will improve their 7nm performance by the end of 2020, but the 5700 XT has shown us that 40CU at 2GHz is way too much for a console right now in 2019. Who knows, maybe they will be able to achieve that, but you shouldn't take the Flute and Gonzalo leaks at face value. Getting a peek at two random lab tests out of hundreds doesn't really mean anything, just that AMD had tested the GPU at 1GHz, 1.8GHz, and 2GHz at some point. It doesn't mean much regarding the final chip, just that at some point a guy at the lab forgot to pull the Ethernet cable out of the test machine while he conducted a 2GHz benchmark.

Here is a random thought exercise: can you think of any console that has a higher clock than its PC counterpart? Because both 1.8GHz and 2GHz with 40CU are faster clocks than the 5700 XT during gaming (according to AMD, the gaming clock is 1755MHz).
The fact that they even bothered doing a 2.0GHz test after the 1.8GHz Gonzalo test bodes well for the Gonzalo test, right?

Also, the Oberon leak wasn't from some lab test like the Flute leak. Komachi found three clock profiles: 800MHz, 911MHz, and 2.0GHz. Why would they include that much information in a benchmark test?
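On the memory line item quoted above (the 312GB/s figure), the bandwidth math is just bus width times per-pin data rate; the 320-bit/14Gbps Anaconda assumption works out the same way (both configurations are hypothetical):

# GDDR6 bandwidth = bus width (bits) / 8 * per-pin data rate (Gbps).
def gddr6_bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

print(f"{gddr6_bandwidth_gbs(192, 13):.0f} GB/s")  # hypothetical Lockhart: 312 GB/s
print(f"{gddr6_bandwidth_gbs(320, 14):.0f} GB/s")  # hypothetical Anaconda: 560 GB/s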
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
AMD changed their PR regarding clock numbers. Gaming clock is what they used to call boost clock, and now boost clock is a new metric that means nothing, because it's a clock the card might hit here and there for a microsecond, depending on how well your specific card did in the silicon lottery.

Expecting a console to reach 2GHz, a speed even AMD didn't dare to talk about, is a bit of a pipe dream unless their 7nm silicon matures in an unbelievable way. The safe assumption is 36CU and lower clocks than the 5700 XT, considering the 5700 XT has a 225W TBP.

If you look at the RX 5700, AnandTech's highest results were 1700MHz, which makes much more sense for console binning levels. It actually represents a console pretty well: 40CU with 4 disabled, a gaming clock of 1625MHz, and a total of 185W TBP.
Gaming clock means nothing, as the GPU is almost always clocked higher ;) What matters is the real average results I wrote about earlier. Btw, 36CU at 1625MHz is about 7.5TF; even I would be disappointed if we got such a next gen :d
 

Deleted member 20297

User requested account closure
Banned
Oct 28, 2017
6,943
That was zlib decompression hardware used for slow streams of data stored on the HDD. We are talking about an entirely different level of processing power to decompress 5GB/s streams coming from NAND memory. About 50 times faster.
The idea is the same, though. Of course this needs to be built for the higher bandwidth, but accelerating decompression with dedicated hardware is not new to consoles, and I think we can expect both companies to try to accelerate decompression like they did in the past.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
The fact that they even bothered doing a 2.0GHz test after the 1.8GHz Gonzalo test bodes well for the Gonzalo test, right?

Also, the Oberon leak wasn't from some lab test like the Flute leak. Komachi found three clock profiles: 800MHz, 911MHz, and 2.0GHz. Why would they include that much information in a benchmark test?
Tests are tests; these are different silicon samples, and they got tested loads of times in different configurations.

I actually don't know the Oberon leak. But I do know that the RX 5700 spits blood when it goes over 1900MHz, and I find it hard to believe a console can hold a 2GHz clock without burning your house down, unless AMD's mature 7nm silicon in 2020 is that much better than today's.


Gaming clock means nothing, as the GPU is almost always clocked higher ;) What matters is the real average results I wrote about earlier. Btw, 36CU at 1625MHz is about 7.5TF; even I would be disappointed if we got such a next gen :d
The results you've talked about are a bit selective; AnandTech got all sorts of results in different games, and they will tell you too that the results they got come down to the silicon lottery. And still, even then, they never got a 2GHz average in anything, and in some games they got sub-1800MHz. With a different 5700 XT, they would have gotten different results.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
Tests are tests; these are different silicon samples, and they got tested loads of times in different configurations.

I actually don't know the Oberon leak. But I do know that the RX 5700 spits blood when it goes over 1900MHz, and I find it hard to believe a console can hold a 2GHz clock without burning your house down, unless AMD's mature 7nm silicon in 2020 is that much better than today's.



The results you've talked about are a bit selective; AnandTech got all sorts of results in different games, and they will tell you too that the results they got come down to the silicon lottery. And still, even then, they never got a 2GHz average in anything, and in some games they got sub-1800MHz. With a different 5700 XT, they would have gotten different results.
With the default cooling on a 5700 XT? The difference will be no more than 2%.
 

AegonSnake

Banned
Oct 25, 2017
9,566
Tests are tests; these are different silicon samples, and they got tested loads of times in different configurations.

I actually don't know the Oberon leak. But I do know that the RX 5700 spits blood when it goes over 1900MHz, and I find it hard to believe a console can hold a 2GHz clock without burning your house down, unless AMD's mature 7nm silicon in 2020 is that much better than today's.



The results you've talked about are a bit selective; AnandTech got all sorts of results in different games, and they will tell you too that the results they got come down to the silicon lottery. And still, even then, they never got a 2GHz average in anything, and in some games they got sub-1800MHz. With a different 5700 XT, they would have gotten different results.
It could also be 7nm+



From what I recall, Komachi thought this was a successor to the Gonzalo chip and near-final silicon.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
AMD changed their PR regarding clock numbers. Gaming clock is what they used to call boost clock, and now boost clock is a new metric that means nothing, because it's a clock the card might hit here and there for a microsecond, depending on how well your specific card did in the silicon lottery.

Expecting a console to reach 2GHz, a speed even AMD didn't dare to talk about, is a bit of a pipe dream unless their 7nm silicon matures in an unbelievable way. The safe assumption is 36CU and lower clocks than the 5700 XT, considering the 5700 XT has a 225W TBP.

If you look at the RX 5700, AnandTech's highest results were 1700MHz, which makes much more sense for console binning levels. It actually represents a console pretty well: 40CU with 4 disabled, a gaming clock of 1625MHz, and a total of 185W TBP.
I agree with you on this one. I believe that 2GHz clock was the dev kit, and I expect the console to be at 1800MHz, mostly because I expect them to be using a more mature 7nm node or maybe even 7nm+. So basically I'm expecting a 44CU GPU at 1.8GHz (48CU with 4 disabled).
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
I agree with you on this one. I believe that 2GHz clock was the dev kit, and I expect the console to be at 1800MHz, mostly because I expect them to be using a more mature 7nm node or maybe even 7nm+. So basically I'm expecting a 44CU GPU at 1.8GHz (48CU with 4 disabled).
I also believe that the PS5 will be based on a GPU we don't know yet, let's say an RX 5800 XT with 48CU at ~1.9GHz, which on console would be 44CU at 1.7GHz or 1.8GHz.
 