How much money are you willing to pay for a next generation console?

  • Up to $199

    Votes: 33 1.5%
  • Up to $299

    Votes: 48 2.2%
  • Up to $399

    Votes: 318 14.4%
  • Up to $499

    Votes: 1,060 48.0%
  • Up to $599

    Votes: 449 20.3%
  • Up to $699

    Votes: 100 4.5%
  • I will pay anything!

    Votes: 202 9.1%

  • Total voters
    2,210

Sekiro

Member
Jan 25, 2019
2,938
United Kingdom
Cloud rendering, hmm..

My two cents, but is this where MS/Sony can use their extensive global servers to give their next-gen consoles some extra juice via the internet, such as processing power, teraflops, virtual RAM, etc., pushing the hardware beyond its native capabilities using the power of the cloud?

Does this technology exist or did i just dream this up right now?
 
Oct 26, 2017
6,151
United Kingdom
It's an alternative to polygons for geometry handling. Dreams is - or was at some point - doing something like a kind of point cloud rendering. Nothing to do with lighting or ray tracing per se btw.

Dreams uses Signed Distance Fields and always has done, afaik.

Point cloud is what Euclideon was harping about for close to a decade and nothing ever came of it.

-----

I thought the 1600 number in the Gonzalo codename was later understood to be a serial code for Sony APUs, not the CPU clock speed?
 

chris 1515

Member
Oct 27, 2017
7,084
Barcelona Spain
Dreams uses Signed Distance Fields and always has done, afaik.

Point cloud is what Euclideon was harping about for close to a decade and nothing ever came of it.

In Dreams, the SDFs are partly rendered by splatting a point cloud; the other part is done by raymarching cubes. Pure point splatting left too many holes to serve as the only rendering method...
 
Last edited:

modiz

Member
Oct 8, 2018
18,650
Cloud rendering, hmm..

My two cents, but is this where MS/Sony can use their extensive global servers to give their next-gen consoles some extra juice via the internet, such as processing power, teraflops, virtual RAM, etc., pushing the hardware beyond its native capabilities using the power of the cloud?

Does this technology exist or did i just dream this up right now?
Point cloud rendering isn't related to "the power of the cloud" or anything like that; look at gofreak's post on the last page, it's just a rendering method.
 
Oct 26, 2017
6,151
United Kingdom
In Dreams, the SDFs are partly rendered by splatting a point cloud; the other part is done by raymarching cubes. Pure point splatting left too many holes to serve as the only rendering method...

I think you're thinking more about their lighting.

As I understood it (and I may have misunderstood), all, if not most, of their geometry simulation is done using SDFs.

The biggest issue with pure point clouds as a geometry representation for rendering is animation, i.e. it's far too taxing perf-wise. It's less of an issue for static geometry, though.
 

chris 1515

Member
Oct 27, 2017
7,084
Barcelona Spain
I think you're thinking more about their lighting.

As I understood it (and I may have misunderstood), all, if not most, of their geometry simulation is done using SDFs.

The biggest issue with pure point clouds as a geometry representation for rendering is animation, i.e. it's far too taxing perf-wise. It's less of an issue for static geometry, though.

No, I'm talking about the geometry. Currently it is not pure point cloud. They render the SDFs using sphere tracing mixed with point splatting, like in the SIGGRAPH 2015 presentation. Simon Brown changed the renderer because pure point splatting left too many holes.

This is what they use, per this article, for the non-point-splatting part of the rendering...


Alex Evans gave some details about the rendering in this video.
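For anyone unfamiliar with the technique being discussed: sphere tracing marches a ray through a signed distance field, stepping each time by the SDF's value, which is always a safe distance. A minimal toy sketch in Python (illustrative only, nothing to do with Dreams' actual code):

```python
import math

def sdf_sphere(p, center=(0.0, 0.0, 5.0), radius=1.0):
    # Signed distance from p to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def sphere_trace(origin, direction, sdf, max_steps=128, eps=1e-4, max_dist=100.0):
    # March along the ray; an SDF value is always a safe step size, so we
    # can never overshoot the surface.
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:        # close enough: surface hit at distance t
            return t
        t += d             # safe step forward
        if t > max_dist:   # ray left the scene
            break
    return None            # miss

# A ray marching down +z from the origin hits the unit sphere at z=5 at t≈4.
hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sdf_sphere)
```

A real renderer would march one such ray per pixel and shade the hit point; point splatting instead projects stored surface points directly onto the screen, which is where the "holes" come from.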

 
Last edited:
Feb 10, 2018
17,534
I wonder what resolution 3rd parties are targeting with their next-gen games.
This gen the PS4 landed around 900p/1080p; the X1 did too, but with more 900p and 720p games.
This time, however, it seems both platforms are designed mainly for gaming, so maybe both will hit native 4K, even if one is slightly more powerful.
 

Deleted member 8831

Guest
I wonder what resolution 3rd parties are targeting with their next-gen games.
This gen the PS4 landed around 900p/1080p; the X1 did too, but with more 900p and 720p games.
This time, however, it seems both platforms are designed mainly for gaming, so maybe both will hit native 4K, even if one is slightly more powerful.

I'm guessing native 1600p-1800p for most that try, since those resolutions don't look much different from 4K. 1st parties will easily hit 4K since they know the architecture.
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
Cloud rendering, hmm..

My two cents, but is this where MS/Sony can use their extensive global servers to give their next-gen consoles some extra juice via the internet, such as processing power, teraflops, virtual RAM, etc., pushing the hardware beyond its native capabilities using the power of the cloud?

Does this technology exist or did i just dream this up right now?
This is exactly what I was thinking. Do you think consoles would still need to be the baseline if a dev wanted to use the cloud?
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
here you go

Furthermore, you can't just take unpatched code from a previous platform and run it on a newer, more powerful platform. That can literally break the game. Some work/modifications/planning has to go into it.

You either build the new hardware on the exact same architecture with very few differences (as was usually the case with Nintendo platforms), or you put the old hardware's CPU into the new one too (as with the PS2 and PS3), or you go in and patch all the old hardware's software to work on the new one (as is the case with Microsoft).
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
here you go

Furthermore, you can't just take unpatched code from a previous platform and run it on a newer, more powerful platform. That can literally break the game. Some work/modifications/planning has to go into it.

You either build the new hardware on the exact same architecture with very few differences (as was usually the case with Nintendo platforms), or you put the old hardware's CPU into the new one too (as with the PS2 and PS3), or you go in and patch all the old hardware's software to work on the new one (as is the case with Microsoft).
Can't see anything about clocks here, and if I wanted to emulate a 1.6 GHz Jaguar by underclocking Zen, I would go far below 1.6 GHz.
 

Thera

Banned
Feb 28, 2019
12,876
France
I wonder what resolution 3rd parties are targeting with their next-gen games.
This gen the PS4 landed around 900p/1080p; the X1 did too, but with more 900p and 720p games.
This time, however, it seems both platforms are designed mainly for gaming, so maybe both will hit native 4K, even if one is slightly more powerful.
We will know in 3 years, when the first third-party next-gen and PC-only games release.
 
Oct 27, 2017
4,018
Florida
I wonder what resolution 3rd parties are targeting with their next-gen games.
This gen the PS4 landed around 900p/1080p; the X1 did too, but with more 900p and 720p games.
This time, however, it seems both platforms are designed mainly for gaming, so maybe both will hit native 4K, even if one is slightly more powerful.

This is like asking what kind of games developers are making. It's going to vary across the board. I don't think good developers go in targeting a resolution. They have a concept for a great game, they set out to make it the best they can, and they make optimizations and sacrifices where they have to so that the image quality stays strong.
 
Colbert's HDD vs SSD vs NVME Speed Comparison: Part 3

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
👁️‍🗨️ About SSDs Part 3

Important disclaimer:

The tests I run are synthetic, meaning no real-world application/game is tested. That is important because I am not claiming these results will materialize to the same magnitude on consoles. But what you see is an indication of the improvement possible with SSD/NVMe (best case).

Testsystem:
Motherboard: MSI B450 Tomahawk
CPU: Ryzen 2700X @ 3700 MHz, UV offset -0.1V
Memory: 2x16GB Vengeance LPX DDR4 @ 3000 MHz 16-17-17-35 (CL-RCD-RP-RAS)
GPU: MSI RX Vega 56 Airboost OC, watercooled by an Eiswolf GPX Pro 120, UV 975mV @ 1550 MHz, VMEM 8GB HBM2 @ 960 MHz
Disk drives:
NVMe Samsung EVO 970 500GB (4x PCIe Gen 3)
SSD SanDisk Ultra 2TB (SATA3)
HDD Seagate Barracuda 4TB 7200rpm (SATA3)

After the testing I did in the last OTs, I noticed I own another tool (AIDA64) that can benchmark the disk drives in my system.
One interesting test I was not able to do until now was the average access time of each disk type.
So I ran all the necessary tests with AIDA64 to get that information.
Here are the results:

Results:
Average Read Access Times:
  • HDD: 16.1 ms
  • SSD: 0.09 ms
  • NVMe: 0.05ms


Analysis:
  1. NVMe has 322 times faster access times than the tested HDD (that HDD should still be better than what you get in any current-gen console)
  2. SSD has 179 times faster access times than the tested HDD
  3. NVMe has 1.8 times faster access times than the tested SSD
Conclusion:
The results reinforce my impression that SSDs already provide a massive jump in reducing latency!

FAQ:
Q: What do those numbers tell me when it comes to consoles?
A: For games you want to improve latency (see above) and read speed with different access patterns (see Tests 1 & 2 below).
  • Sequential Access Pattern: Initial load of a game, reload of big levels. In such a scenario NVMes play out their biggest advantage, which is raw speed (see Tests 1 and 2).
  • Random Access Pattern: In-game streaming, e.g. between levels, or within a level where you move fast through a given terrain. The bigger the block size your storage is organized in, the bigger the advantage of NVMes over SSDs (see Test 2).
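The speedup ratios in the analysis follow directly from the measured access times and are easy to recompute:

```python
# Recomputing the speedup ratios from the measured average access times.
hdd_ms, ssd_ms, nvme_ms = 16.1, 0.09, 0.05

nvme_vs_hdd = hdd_ms / nvme_ms   # ~322x
ssd_vs_hdd = hdd_ms / ssd_ms     # ~179x
nvme_vs_ssd = ssd_ms / nvme_ms   # ~1.8x
```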

Edit:
Maybe it is a better idea to re-post my other tests here ... (edited out the links)

About SSDs Part 1

In the picture below you see 3 benchmarks for each type of storage you could use in a console, except Intel Optane. The tests were performed on my own PC. While the sizes of the drives differ, it will still give you an idea of a realistic speed estimate.

I only talk about reading operations!



If someone is not familiar with this kind of benchmark, here is what the 4 tests are about:
The first test is a sequential read of 1GB of data with 32 I/O queues by 1 thread (not a valid use case for streaming data to a game, but it matches initial loading).
The second test is random reads of 4KB data objects from a 1GB data file with 8 I/O queues done by 8 threads (not likely on a console because of the 8 threads).
The third test is random reads of 4KB data objects from a 1GB data file with 32 I/O queues done by 1 thread (a typical console use case).
The fourth test is random reads of 4KB data objects from a 1GB data file with 1 I/O queue done by 1 thread (a worst-case scenario).

Analysis:

A single non-RAID HDD is 50 times worse than any decent SATA3 SSD at random-access reads, i.e. at everything other than sequential reads, which are not your typical in-game streaming use case. So any SSD, even unoptimized, is already a huge jump in streaming-to-game capability. And that is just the worst-case scenario: test pattern 3 looks even better, where we see 100 times the performance. I repeat: 100 times the performance on the most common access pattern you will find on any system.
Test pattern 1 shows a 6.4-times increase in speed in favor of the NVMe. The open question is how often you will see that pattern in your games other than initial loading or copying/moving data from drive A to drive B...
In test pattern 2 you still see a 4.35-times difference, but I ask whether this is a valid use case for consoles, because you run 8 dedicated threads while you have to maintain your frame rate. Maybe someone with deeper knowledge can shed some light on it...
In test pattern 3 the NVMe advantage is reduced to a mere 37%, on a pattern which is normally your bread-and-butter access pattern on a PC, and maybe on a console too.
In test pattern 4, the worst-case scenario, my NVMe is just 22.5% better than a SATA3 SSD.

Conclusion:

I am aware that a real-world access pattern would be a mix of the tested patterns, but the speed differences between PC SSDs are not that high once you leave the realm of best-case scenarios. There is also a high chance that a next-gen console game optimized for SSD speeds would actually target nearer to the worst case than to the best case, so that it can meet its streaming budget almost 100% of the time.

About SSDs Part 2

My first speed test for drives didn't address bigger data block sizes (fewer blocks in an allocation table), so I was looking for a tool that would let me alter those block sizes for the speed tests.

I was now able to run a couple of tests with the same NVMe, SSD and HDD using different block-size parameters, and you can see the results in the picture below:

Testmethod:
Random Reads with 1 thread and 1 queue from a 1GB file!



You can see that a normal SSD's speed stagnates much earlier; the SSD is at most 2.4 times faster than my HDD.
It's a different story with the NVMe. There is already triple the performance beginning at a 32KB block size (3.6 times faster than my HDD), and this goes up to 12.7 times faster than my HDD at a 2MB block size.

While very big block sizes are good for the speed of the system, the bigger they are, the more space is wasted on smaller files. This means you need to find a balance between wasting storage space and speed.
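That tradeoff is easy to quantify: each file wastes, on average, half of its last block. A toy calculation with made-up asset sizes:

```python
def wasted_bytes(file_sizes, block_size):
    # Each file occupies whole blocks; the unused tail of its last block
    # is lost to internal fragmentation.
    return sum((-size) % block_size for size in file_sizes)

# Hypothetical asset sizes (made up for illustration): the bigger the
# block, the more slack space each small file drags along.
assets = [3_000, 70_000, 500_000, 2_500_000]
waste_4k = wasted_bytes(assets, 4 * 1024)         # a few KB total
waste_2m = wasted_bytes(assets, 2 * 1024 * 1024)  # several MB total
```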

Conclusion:
To reach gains like those shown in the PS5 demo, you will probably need a storage implementation around 2000 MB/s. Personally, I would put the sweet spot for the data block at 256KB, though without knowing the typical file sizes on a console.
 
Last edited:

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
👁️‍🗨️ About SSDs.
After the testing I did in the last OTs, I noticed I own another tool (AIDA64) that can benchmark the disk drives in my system.
One interesting test I was not able to do until now was the average access time of each disk type.
So I ran all the necessary tests with AIDA64 to get that information.
Here are the results:

Results:
Average Read Access Times:
  • HDD: 16.1 ms
  • SSD: 0.09 ms
  • NVMe: 0.05ms


Analysis:
  1. NVMe has 322 times faster access times than the tested HDD (that HDD should still be better than what you get in any current-gen console)
  2. SSD has 179 times faster access times than the tested HDD
  3. NVMe has 1.8 times faster access times than the tested SSD
Conclusion:
The results reinforce my impression that SSDs already provide a massive jump in reducing latency!

As always my setup is:
Motherboard: MSI B450 Tomahawk
CPU: Ryzen 2700X @ 3700 MHz, UV offset -0.1V
Memory: 2x16GB Vengeance LPX @ 3000 MHz 16-17-17-35 (CL-RCD-RP-RAS)
NVMe Samsung EVO 970 500GB (4x PCIe Gen 3)
SSD SanDisk Ultra 2TB (SATA3)
HDD Seagate Barracuda 4TB 7200rpm (SATA3)
I love triple-digit numbers! This has to be the biggest leap in storage, right? Except maybe for the N64.
 

BreakAtmo

Member
Nov 12, 2017
13,893
Australia
👁️‍🗨️ About SSDs.
After the testing I did in the last OTs, I noticed I own another tool (AIDA64) that can benchmark the disk drives in my system.
One interesting test I was not able to do until now was the average access time of each disk type.
So I ran all the necessary tests with AIDA64 to get that information.
Here are the results:

Results:
Average Read Access Times:
  • HDD: 16.1 ms
  • SSD: 0.09 ms
  • NVMe: 0.05ms


Analysis:
  1. NVMe has 322 times faster access times than the tested HDD (that HDD should still be better than what you get in any current-gen console)
  2. SSD has 179 times faster access times than the tested HDD
  3. NVMe has 1.8 times faster access times than the tested SSD
Conclusion:
The results reinforce my impression that SSDs already provide a massive jump in reducing latency!

As always my setup is:
Motherboard: MSI B450 Tomahawk
CPU: Ryzen 2700X @ 3700 MHz, UV offset -0.1V
Memory: 2x16GB Vengeance LPX @ 3000 MHz 16-17-17-35 (CL-RCD-RP-RAS)
NVMe Samsung EVO 970 500GB (4x PCIe Gen 3)
SSD SanDisk Ultra 2TB (SATA3)
HDD Seagate Barracuda 4TB 7200rpm (SATA3)

Something I want to know - would the difference between the SSD and the NVMe be very noticeable? On the one hand the reduction from NVMe to SSD is very small compared to the reduction from HDD to SSD, but on the other hand the NVMe has half the latency of the SSD. I can't tell which is the better way of looking at it.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
Something I want to know - would the difference between the SSD and the NVMe be very noticeable? On the one hand the reduction from NVMe to SSD is very small compared to the reduction from HDD to SSD, but on the other hand the NVMe has half the latency of the SSD. I can't tell which is the better way of looking at it.
The less latency you can achieve the better.
But this particular test is just one part of a more complex picture.
For a better understanding, I recommend looking at the other tests I did:
Speed Comparison Part 1
Speed Comparison Part 2
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
I have updated my comment and included all the other tests instead of just linking them.
I'm sorry about this, but I'm still not that educated on SSDs. So what exactly matters most: read time, write time, or access time? Which would devs most likely care about the most? And if you averaged the speeds out, how big a difference would the overall performance be compared to this gen?

Again I'm sorry for asking so much I know you did a lot of work already but I'm just really hype about these advancements.
 
Feb 10, 2018
17,534
This is like asking what kind of games developers are making. It's going to vary across the board. I don't think good developers go in targeting a resolution. They have a concept for a great game idea and they set out to make it the best they can and make optimizations and sacrifices where they have to to make sure the image quality is still strong.

Let me rephrase: I wonder what performance it would take to render at native 4K with a mixture of medium, high and ultra settings.

Devs could have started development on a Ryzen 2700 and a Vega 56; in that case it's likely both consoles will have quite a bit of headroom.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
I'm sorry about this, but I'm still not that educated on SSDs. So what exactly matters most: read time, write time, or access time? Which would devs most likely care about the most? And if you averaged the speeds out, how big a difference would the overall performance be compared to this gen?

Again I'm sorry for asking so much I know you did a lot of work already but I'm just really hype about these advancements.
For consoles it is latency (access time, addressed by Test 3) and read speed with different access patterns (addressed by Tests 1 & 2).
  • Sequential Access Pattern: Initial load of a game, reload of big levels. In such a scenario NVMes play out their biggest advantage, which is raw speed (see Tests 1 and 2).
  • Random Access Pattern: In-game streaming, for example between levels, or within a level where you move fast through a given terrain. The bigger the block size your storage is organized in, the bigger the advantage of NVMes over SSDs (see Test 2).
WRITE is usually only used for installing the game and save-file operations. WRITE is also not latency-critical, imo.
 
Last edited:

Lady Gaia

Member
Oct 27, 2017
2,609
Seattle
I'm sorry about this, but I'm still not that educated on SSDs. So what exactly matters most: read time, write time, or access time? Which would devs most likely care about the most?

They're important for different reasons, but developers are likely to have reasons to be excited about every aspect. Trying to rank them is a little like wondering whether you want more pixel or polygon throughput when the only reasonable answer is "both."

Fast reads will enable rapid, seamless transitions between areas, as well as encouraging greater diversity of environment and character models. Contextual audio beyond anything we're used to is also an intriguing possibility. It opens up a lot of fundamental game design evolution in addition to just cutting initial load times.

Low access times will reduce the need for asset duplication, reducing upward pressure on the size of games. It could also enable a game to respond by loading unique animations and audio fast enough to eliminate the need to keep a lot in memory - just in case it's needed. It will help soften the blow of not getting a ton more memory this generation. Consider cases like a sports title being able to access tens of gigabytes of play-by-play dialogue the instant it's needed, helping to reduce the number of times you hear the exact same quip.

Fast writes could enable persistent worlds in a manner we've never seen before. I expect this will be the least used capability initially, but creative use could prove intriguing in its own right as the generation matures.
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
They're important for different reasons, but developers are likely to have reasons to be excited about every aspect. Trying to rank them is a little like wondering whether you want more pixel or polygon throughput when the only reasonable answer is "both."

Fast reads will enable rapid, seamless transitions between areas, as well as encouraging greater diversity of environment and character models. Contextual audio beyond anything we're used to is also an intriguing possibility. It opens up a lot of fundamental game design evolution in addition to just cutting initial load times.

Low access times will reduce the need for asset duplication, reducing upward pressure on the size of games. It could also enable a game to respond by loading unique animations and audio fast enough to eliminate the need to keep a lot in memory - just in case it's needed. It will help soften the blow of not getting a ton more memory this generation. Consider cases like a sports title being able to access tens of gigabytes of play-by-play dialogue the instant it's needed, helping to reduce the number of times you hear the exact same quip.

Fast writes could enable persistent worlds in a manner we've never seen before. I expect this will be the least used capability initially, but creative use could prove intriguing in its own right as the generation matures.
I think WRITE is not as latency- or speed-critical in-game as READ. For WRITE you can easily queue/buffer and write in a background task with a lower thread priority.
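The buffered-write idea described here is a standard producer/consumer pattern; a minimal illustration (not any console's actual I/O API, and thread-priority control is platform-specific and omitted):

```python
import queue
import threading

# Sketch of "buffer writes, flush in the background": game code enqueues
# save data and never blocks on the disk; a background worker drains the
# queue one item at a time.
write_queue = queue.Queue()

def writer_worker(storage):
    while True:
        item = write_queue.get()
        if item is None:            # sentinel: shut down the worker
            break
        key, payload = item
        storage[key] = payload      # stand-in for the actual disk write
        write_queue.task_done()

storage = {}
threading.Thread(target=writer_worker, args=(storage,), daemon=True).start()

write_queue.put(("save_slot_1", b"player state"))
write_queue.join()                  # a game would keep running instead of joining
```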
 

AegonSnake

Banned
Oct 25, 2017
9,566
*The implications of Gonzalo's lower CPU speed mean the Firestrike 20000+ score was not achieved by the CPU performing like a desktop 3700X clocked at 3.2GHz, but rather by a high-performance GPU with much higher bandwidth than the Navi 5700XT and a super-fast SSD. So a 10+ TF GPU is not entirely impossible.
Yep, this is all very exciting.

Now all we've got to do is find a Firestrike score of 20k for a Zen 1 AMD CPU with 8MB of L3 cache. We know the Zen 2 IPC increase is 15%, according to AMD themselves. So we should be looking at a Zen 1 8-core/16-thread CPU at 3.68 GHz. That's a Ryzen 2700. I have looked at several Firestrike scores for that CPU with a 5700XT and they all seem to be under 20k, most in the 17k range.

One guy managed to overclock his 5700XT to 2.6 GHz, a ~13 TFLOPS GPU, and still only managed a 17.5k score because the clock speed on his CPU was only 3.5 GHz.
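The Zen 1 clock equivalence used in that post is simple arithmetic, assuming AMD's quoted ~15% IPC uplift from Zen 1 to Zen 2:

```python
# Back-of-the-envelope: what Zen 1 clock matches a 3.2 GHz Zen 2, given
# AMD's quoted ~15% IPC uplift? (An approximation; IPC gains vary by workload.)
zen2_clock_ghz = 3.2
ipc_uplift = 1.15
equivalent_zen1_clock_ghz = zen2_clock_ghz * ipc_uplift  # ~3.68 GHz
```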
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
For consoles it is latency (access time, addressed by Test 3) and read speed with different access patterns (addressed by Tests 1 & 2).
  • Sequential Access Pattern: Initial load of a game, reload of big levels. In such a scenario NVMes play out their biggest advantage, which is raw speed (see Tests 1 and 2).
  • Random Access Pattern: In-game streaming, for example between levels, or within a level where you move fast through a given terrain. The bigger the block size your storage is organized in, the bigger the advantage of NVMes over SSDs (see Test 2).
WRITE is usually only used for installing the game and save-file operations. WRITE is also not latency-critical, imo.
Thanks so much. I'll have to read your tests again.
They're important for different reasons, but developers are likely to have reasons to be excited about every aspect. Trying to rank them is a little like wondering whether you want more pixel or polygon throughput when the only reasonable answer is "both."

Fast reads will enable rapid, seamless transitions between areas, as well as encouraging greater diversity of environment and character models. Contextual audio beyond anything we're used to is also an intriguing possibility. It opens up a lot of fundamental game design evolution in addition to just cutting initial load times.

Low access times will reduce the need for asset duplication, reducing upward pressure on the size of games. It could also enable a game to respond by loading unique animations and audio fast enough to eliminate the need to keep a lot in memory - just in case it's needed. It will help soften the blow of not getting a ton more memory this generation. Consider cases like a sports title being able to access tens of gigabytes of play-by-play dialogue the instant it's needed, helping to reduce the number of times you hear the exact same quip.

Fast writes could enable persistent worlds in a manner we've never seen before. I expect this will be the least used capability initially, but creative use could prove intriguing in its own right as the generation matures.
Thank you as well. So when we talk about GB/s, do we mean write or read speeds? And does this mean the SSDs in next-gen consoles are actually hundreds of times faster overall than the ones we have now (depending on the function, of course)?

Edit: The first question was referring to MS's comment about "40x faster". Why would MS say that if other functions are clearly an even bigger improvement?
 
Oct 26, 2017
6,151
United Kingdom
No, I'm talking about the geometry. Currently it is not pure point cloud. They render the SDFs using sphere tracing mixed with point splatting, like in the SIGGRAPH 2015 presentation. Simon Brown changed the renderer because pure point splatting left too many holes.

This is what they use, per this article, for the non-point-splatting part of the rendering...


Alex Evans gave some details about the rendering in this video.



Well this I didn't know.

Quite fascinating. Thank you kindly for the links, good sir.

I will read/watch with keen interest.
 

AegonSnake

Banned
Oct 25, 2017
9,566
So it seems the chip Gonzalo has 8MB of L3 cache? Interesting, this would lend more credence to that leak in the end:

That leak doesn't talk in TFLOPS and GHz and makes zero sense to me lol.

  • monolithic die ~22.4mm by ~14.1mm (315 mm2 APU size?)
  • 16 Samsung K4ZAF325BM-HC18 in clamshell configuration (Is this RAM? How much? HBM2 or GDDR6?)
  • memory vrm seems like overkill with multiple Fairchild/ON Semiconductor FDMF3170 power stages controlled by an MP2888 from MPS (what the hell does this mean?)
  • 3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND) (If this is also SSD then why the need for 4 NAND packages below?)
  • 4 NAND packages soldered to the PCB which are TH58LJT2T24BAEG from Toshiba (I am guessing these are 256GB SSDs??)
  • PS5016-E16 from Phison (This seems to be the SSD controller)

Can anyone please go through this and explain what this means?

The only thing I can figure out is the 315mm² APU, which makes more sense now that we know the CPU is much smaller. This looks like a 40 CU part. Maybe 44 with 4 disabled? But if that's the case, then how can it be more powerful than the Xbox Scarlett APU, which is between 380-400mm²? 7nm+ gives you a 15% die-size reduction at most, which translates to a 7nm APU size of 362mm². That could be a 56 CU chip. But MS should be able to fit more than 56 CUs in their 380mm² die. So the rumors don't add up even if we assume the highly unlikely scenario that the PS5 is on 7nm+ and Xbox Scarlett isn't.
 

modiz

Member
Oct 8, 2018
18,650
So it seems the chip Gonzalo has 8MB of L3 cache? Interesting, this would lend more credence to that leak in the end:

Wait a minute:
"16 Samsung K4ZAF325BM-HC18 in clamshell configuration"
Wouldn't AMD Flute's memory layout match this?
That leak doesn't talk in TFLOPS and GHz and makes zero sense to me lol.

  • monolithic die ~22.4mm by ~14.1mm (315 mm2 APU size?)
  • 16 Samsung K4ZAF325BM-HC18 in clamshell configuration (Is this RAM? How much? HBM2 or GDDR6?)
  • memory vrm seems like overkill with multiple Fairchild/ON Semiconductor FDMF3170 power stages controlled by an MP2888 from MPS (what the hell does this mean?)
  • 3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND) (If this is also SSD then why the need for 4 NAND packages below?)
  • 4 NAND packages soldered to the PCB which are TH58LJT2T24BAEG from Toshiba (I am guessing these are 256GB SSDs??)
  • PS5016-E16 from Phison (This seems to be the SSD controller)

Can anyone please go through this and explain what this means?

The only thing I can figure out is the 315mm² APU, which makes more sense now that we know the CPU is much smaller. This looks like a 40 CU part. Maybe 44 with 4 disabled? But if that's the case, then how can it be more powerful than the Xbox Scarlett APU, which is between 380-400mm²? 7nm+ gives you a 15% die-size reduction at most, which translates to a 7nm APU size of 362mm². That could be a 56 CU chip. But MS should be able to fit more than 56 CUs in their 380mm² die. So the rumors don't add up even if we assume the highly unlikely scenario that the PS5 is on 7nm+ and Xbox Scarlett isn't.
It could always be that the Scarlett video wasn't indicative of the final product.
And regarding the parts:
16 GDDR6 chips at 18 Gbps?
Not sure about the second point.
The 3rd point is 3 chips of 2GB DDR4 memory, 2 of them used for SSD caching. But this would contradict one of the points in the patent that mentioned using SRAM for the SSD instead of DRAM; of course, the patents might not end up being used.
From what I'm finding on the 4th point, each package is 500GB of NAND, which means 2TB of SSD total.
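If those really are 18 Gbps GDDR6 chips in clamshell mode (two chips sharing each 32-bit channel, i.e. a 256-bit bus), the implied bandwidth works out as follows. These are assumptions layered on a rumor, not confirmed specs:

```python
# Implied bandwidth IF the rumored parts are 18 Gbps GDDR6 chips in
# clamshell mode: two chips per 32-bit channel, so 16 chips -> 256-bit bus.
chips = 16
bus_width_bits = (chips // 2) * 32                     # 256-bit
data_rate_gbps = 18                                    # per pin
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8   # GB/s
```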
 
Last edited:

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
Something to remember about people that "leak" next-gen PCBs and chips (based on AFAIK/IIRC knowledge of MS dev kits):

a) Xbox dev kits feature a chassis intrusion detection system.
b) Xbox dev kits need to be connected to the MS partner network to function properly for game development.
c) The intrusion detection system sends an event to MS if you open the dev kit.
d) (Not experienced first-hand, but assumed:) the moment MS is notified about such an incident (chassis intrusion), the dev kit gets blocked and the registered owner of the dev kit gets investigated.

So, to my knowledge, you will not get that kind of information from a developer! I also doubt that a HW engineer working on the product would risk their job to leak such stuff. All that is left is maybe someone at the board manufacturer.
 
Last edited:

modiz

Member
Oct 8, 2018
18,650
So is Flute the next iteration of Gonzalo, or is it Xbox as Tom's Hardware seems to think?
I don't see any reason to think it's not the next iteration of Gonzalo right now; all the clock speeds match Gonzalo perfectly, so unless MS and Sony are going to end up with identical specs in both the CPU and the GPU, this is Gonzalo.
 

Ushay

Member
Oct 27, 2017
8,698
👁️‍🗨️ About SSDs Part 3

Testsystem:
Motherboard: MSI B450 Tomahawk
CPU: Ryzen 2700X @ 3700 Mhz UV Offset -0.1V
Memory: 2x16GB Vengeance LPX DDR4 @ 3000 Mhz 16-17-17-35 (CL-RCD-RP-RAS)
GPU: MSI RX Vega 56 Airboost OC watercooled by a Eiswolf GPX Pro 120 UV 975mV @ 1550 Mhz, VMEM 8GB HBM2 @ 960Mhz
Disk drives:
NVMe Samsung EVO 970 500GB (4x PCIe Gen 3)
SSD SanDisk Ultra 2TB (SATA3)
HDD Seagate Barracuda 4TB 7200rpm (SATA3)

After the testing I did in the last OTs, I noticed I own another tool (AIDA64) with which I can benchmark the disk drives in my system.
One interesting test I was not able to do until now was the average access time of each disk type.
So I ran all the necessary tests with AIDA64 to get that information.
Here are the results:

Results:
Average Read Access Times:
  • HDD: 16.1 ms
  • SSD: 0.09 ms
  • NVMe: 0.05 ms
(image: AIDA64 average access time results)


Analysis:
  1. The NVMe has 322 times faster access times than the tested HDD (and that HDD should still be better than what you get with any current gen console)
  2. The SSD has 179 times faster access times than the tested HDD
  3. The NVMe has 1.8 times faster access times than the tested SSD
Conclusion:
The results reinforce my impression that SSDs already provide a massive jump in reduced latency!
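For reference, those speedup factors are just ratios of the measured access times; a quick Python sketch using the averages from the table above:

```python
# Measured average read access times from the AIDA64 runs above (in ms).
access_ms = {"HDD": 16.1, "SSD": 0.09, "NVMe": 0.05}

def speedup(slow: str, fast: str) -> float:
    """How many times faster the 'fast' drive responds than the 'slow' one."""
    return access_ms[slow] / access_ms[fast]

print(f"NVMe vs HDD: {speedup('HDD', 'NVMe'):.0f}x")  # ~322x
print(f"SSD vs HDD:  {speedup('HDD', 'SSD'):.0f}x")   # ~179x
print(f"NVMe vs SSD: {speedup('SSD', 'NVMe'):.1f}x")  # ~1.8x
```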

FAQ:
Q: What do those numbers tell me when it comes to consoles?
A: For games you want to improve latency (see above) and read speed with different access patterns (see below Tests 1 & 2).
  • Sequential Access Pattern: initial load of a game, reload of big levels. In such a scenario NVMes play to their biggest advantage, which is raw speed (see Tests 1 and 2).
  • Random Access Pattern: streaming in-game between levels, or within a level where you move quickly through a given terrain. The bigger the block size your storage is organized in, the bigger the advantage of NVMes over SSDs (see Test 2)

Edit:
Maybe it is a better idea to re-post my other tests here ... (edited out the links)

About SSDs Part 1

In the picture below you see 3 benchmarks for each type of storage you could use in a console, except Intel Optane. The tests were performed on my own PC. While the sizes of the media differ, it should still give you an idea of a realistic speed estimate.

I only talk about reading operations!

(image: benchmark results for NVMe, SSD and HDD across the 4 test patterns)


If someone is not familiar with this kind of benchmark, here is an explanation of what the 4 tests are about:
The first test is a sequential read of 1GB of data with 32 I/O queues and 1 thread (not a valid use case for streaming data to a game, but for initial loading).
The second test is random reads of 4KB data objects from a 1GB data file with 8 I/O queues done by 8 threads (not likely on a console because of the 8 threads).
The third test is random reads of 4KB data objects from a 1GB data file with 32 I/O queues done by 1 thread (a typical console use case).
The fourth test is random reads of 4KB data objects from a 1GB data file with 1 I/O queue done by 1 thread (a worst case scenario).
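The difference between these access patterns is easy to explore yourself. A minimal Python sketch (not CrystalDiskMark, just timing sequential versus shuffled 4KB reads on a small scratch file):

```python
import os
import random
import tempfile
import time

BLOCK = 4096                  # 4KB reads, as in tests 2-4
FILE_SIZE = 16 * 1024 * 1024  # small scratch file; the post used a 1GB file

# Create a scratch file filled with random data to read back.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

def timed_reads(offsets):
    """Read one BLOCK at each offset, returning the elapsed wall time."""
    fd = os.open(path, os.O_RDONLY)
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)
    os.close(fd)
    return time.perf_counter() - start

n_blocks = FILE_SIZE // BLOCK
sequential = [i * BLOCK for i in range(n_blocks)]  # test-1-style pattern
shuffled = random.sample(sequential, n_blocks)     # tests-2/3/4-style pattern

t_seq = timed_reads(sequential)
t_rand = timed_reads(shuffled)
print(f"sequential: {t_seq:.3f}s, random: {t_rand:.3f}s")
os.unlink(path)
```

Caveats: `os.pread` is POSIX-only, and because the OS page cache serves most of these reads, the absolute numbers won't reflect raw drive speed (real benchmarks bypass the cache, e.g. via O_DIRECT). The point is only to show how the two access patterns are constructed.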

Analysis:

A single non-RAID HDD is 50 times worse than any decent SATA3 SSD at random data reads; the only place it keeps up is sequential reads, which are not your typical in-game streaming use case. So any SSD, even unoptimized, will already be a huge jump in streaming-to-game capabilities! And that is just the worst case scenario. Test pattern 3 looks even better: there we see 100 times the performance. I repeat: 100 times the performance on the most common access pattern you will find on any system.
Test pattern 1 shows a 6.4 times increase in speed in favor of the NVMe. The open question is how often you will see that pattern in your games, other than initial loading or copying/moving data from drive A to drive B.
In test pattern 2 you still see a 4.35 times difference, but I question whether this is a valid use case for consoles, because you are running 8 dedicated threads while having to maintain your frame rate. Maybe someone who has deeper knowledge can shed some light on it.
In test pattern 3 the NVMe advantage is reduced to a mere 37% on a pattern which is normally your bread and butter access pattern on a PC, and maybe on a console too.
In test pattern 4, the worst case scenario, my NVMe is just 22.5% better than a SATA3 SSD.

Conclusion:

I am aware that a real world access pattern would be a mix of those tested patterns, but the difference in speed between PC SSDs is not as high once you leave the realm of best case scenarios. There is also a good chance that a next gen console game optimized for SSD speeds would actually target nearer to the worst case than to the best case, so it can hold that performance almost 100% of the time.

About SSDs Part 2

My first speed test for the drives didn't address bigger data block sizes (fewer blocks in an allocation table), so I was looking for a tool that would allow me to alter those block sizes for the speed tests.

I was now able to run a couple of tests with the same NVMe, SSD and HDD with different block size parameters; you can see the results in the picture below:

Test method:
Random reads with 1 thread and 1 queue from a 1GB file!

(image: random read speeds at different block sizes)


You can see that a normal SSD plateaus much earlier; at its maximum, the SSD is 2.4 times faster than my HDD.
It's a different story with an NVMe. It already reaches triple the performance at a 32KB block size (3.6 times faster than my HDD), going up to 12.7 times faster than my HDD at a 2MB block size.

While very big block sizes are good for the speed of the system, the bigger they are, the more space is allocated for smaller files. That means you need to find a balance between wasting storage space and speed.

Conclusion:
To reach gains like those shown in the PS5 demo, you will probably need a storage implementation around 2000 MB/s. Personally, without knowing the typical file sizes on a console, I would see the sweet spot for a data block at 256KB.
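That balance can be sketched numerically: slack space is whatever the last block of each file leaves unused, so large blocks punish small files. A minimal sketch (the file-size mix below is purely hypothetical):

```python
def slack_bytes(file_sizes, block_size):
    """Bytes wasted because each file's final block is only partially used."""
    waste = 0
    for size in file_sizes:
        remainder = size % block_size
        if remainder:
            waste += block_size - remainder
    return waste

# Hypothetical mix: many small files plus a few large streaming assets.
files = [700, 3_500, 12_000] * 1000 + [300_000_000] * 4

for bs in (4_096, 65_536, 262_144, 2_097_152):
    mb = slack_bytes(files, bs) / 1e6
    print(f"block size {bs // 1024:>4} KB -> ~{mb:.0f} MB slack")
```

With this mix the slack grows from a few MB at 4KB blocks to hundreds of MB at 256KB and beyond, which is the trade-off the post describes.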

Nice writeup Colbert! And good lord, look at those read/write speeds.
NVMe and CPU are the real winners next gen. Amazing.
 

Lady Gaia

Member
Oct 27, 2017
2,609
Seattle
Thank you as well. So when we talk about GB/s do we mean write or read speeds?

If it's not specified, chances are it's read. Both can be measured in GB/s and they're almost invariably different.

And does this mean that overall the SSDs in next gen consoles are actually hundreds of times faster than the ones we have now (depending on the function, of course)?

For some operations, yes, but developers are generally going to design away from random-access reads because they perform so poorly. Especially on physical media. So the situations where you'll see that much speed improvement are rare because everyone has been avoiding them already.

Edit: The first question was referring to MS's comment of "40x faster". Why would MS say that if other functions are clearly an even bigger improvement?

We have no idea what Microsoft's solution really looks like, so it's hard to speculate, but most people focus on sequential read rates because that's what yields the biggest raw numbers (people are nothing if not predictable.) So that's where the comparison is most likely to be drawn.
 

Nachtmaer

Member
Oct 27, 2017
348
That leak doesn't talk in TFLOPs and GHz and makes zero sense to me lol.

  • monolithic die ~22.4mm by ~14.1mm (315 mm2 APU size?)
  • 16 Samsung K4ZAF325BM-HC18 in clamshell configuration (Is this RAM? How much? HBM2 or GDDR6?)
  • memory vrm seems like overkill with multiple Fairchild/ON Semiconductor FDMF3170 power stages controlled by an MP2888 from MPS (what the hell does this mean?)
  • 3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND) (If this is also SSD then why the need for 4 NAND packages below?)
  • 4 NAND packages soldered to the PCB which are TH58LJT2T24BAEG from Toshiba (I am guessing these are 256GB SSDs??)
  • PS5016-E16 from Phison (This seems to be the SSD controller)

Can anyone please go through this and explain what this means?

The only thing I can figure out is the 315mm2 APU, which makes more sense now that we know the CPU is much smaller. This looks like a 40 CU part. Maybe 44 with 4 disabled? But if that's the case, then how can it be more powerful than the Xbox Scarlett APU, which is between 380-400mm2? 7nm+ gives you a 15% die size reduction max, so that gives us a 7nm APU size of 362 mm2. That could be a 56 CU chip. But MS should be able to fit more than 56 CUs in their 380mm2 die. So the rumors don't add up even if we assume the highly unlikely scenario that the PS5 is on 7nm+ and Xbox Scarlett isn't.
It's just a list of the part numbers of stuff you could find on the PCB. You can throw these into Google to find their specs.

- SoC size
- 16 16Gb GDDR6 chips, so 32GB in total
- voltage regulators and controller for power delivery
- 3 16Gb DDR4 chips, SSDs tend to have DDR onboard nowadays
- NAND chips are flash/SSD storage chips and then there's the controller for it

This "leak" still seems very weird to me. It's oddly specific, but anyone with a basic understanding of this stuff could just pull it out of their ass. It'd also mean they have near-final silicon; at least close enough to production quality that it goes into actual boards and is leaving AMD/Sony's labs. For a late 2020 launch, this seems way too early.
 

sncvsrtoip

Banned
Apr 18, 2019
2,773
That leak doesn't talk in TFLOPs and GHz and makes zero sense to me lol.

  • monolithic die ~22.4mm by ~14.1mm (315 mm2 APU size?)
  • 16 Samsung K4ZAF325BM-HC18 in clamshell configuration (Is this RAM? How much? HBM2 or GDDR6?)
  • memory vrm seems like overkill with multiple Fairchild/ON Semiconductor FDMF3170 power stages controlled by an MP2888 from MPS (what the hell does this mean?)
  • 3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND) (If this is also SSD then why the need for 4 NAND packages below?)
  • 4 NAND packages soldered to the PCB which are TH58LJT2T24BAEG from Toshiba (I am guessing these are 256GB SSDs??)
  • PS5016-E16 from Phison (This seems to be the SSD controller)

Can anyone please go through this and explain what this means?

The only thing I can figure out is the 315mm2 APU, which makes more sense now that we know the CPU is much smaller. This looks like a 40 CU part. Maybe 44 with 4 disabled? But if that's the case, then how can it be more powerful than the Xbox Scarlett APU, which is between 380-400mm2? 7nm+ gives you a 15% die size reduction max, so that gives us a 7nm APU size of 362 mm2. That could be a 56 CU chip. But MS should be able to fit more than 56 CUs in their 380mm2 die. So the rumors don't add up even if we assume the highly unlikely scenario that the PS5 is on 7nm+ and Xbox Scarlett isn't.
16 Samsung K4ZAF325BM-HC18 in clamshell configuration (Is this RAM? How much? HBM2 or GDDR6?)
16×2GB (so 32GB total) of 18Gbps GDDR6

3 Samsung K4AAG085WB-MCRC, 2 of those close to the NAND acting as DRAM cache (unusual 2GB DRAM per 1 TB NAND) (If this is also SSD then why the need for 4 NAND packages below?)
This is 3× 2GB DDR4; 2GB acts as cache for each 1TB of NAND (and there is 2TB of NAND).

4 NAND packages soldered to the PCB which are TH58LJT2T24BAEG from Toshiba (I am guessing these are 256GB SSDs??)
Nope, it's 2TB of TLC 3D NAND in total.

PS5016-E16 from Phison (This seems to be the SSD controller)
Yes, PCI-Express 4.0.
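As an aside on the "unusual 2GB DRAM per 1TB NAND" point: consumer SSD controllers typically budget about 1GB of DRAM per 1TB of NAND, enough to keep the whole logical-to-physical page map resident (one 4-byte entry per 4KiB page). A quick sketch of that rule of thumb, which makes 2GB per 1TB look like double the usual allowance:

```python
def ftl_dram_bytes(nand_bytes, page_size=4096, entry_size=4):
    """DRAM needed to keep the full logical-to-physical page map resident."""
    return nand_bytes // page_size * entry_size

TB = 1024 ** 4
GB = 1024 ** 3
print(ftl_dram_bytes(1 * TB) / GB)  # 1.0 -> the usual ~1GB DRAM per 1TB NAND
print(ftl_dram_bytes(2 * TB) / GB)  # 2.0 -> a 2TB drive needs ~2GB for the map
```

The extra DRAM in the rumored layout could simply be caching more than the mapping table, but that is speculation.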
 

AegonSnake

Banned
Oct 25, 2017
9,566
Wait a minute:
"16 Samsung K4ZAF325BM-HC18 in clamshell configuration"
Wouldn't AMD Flute's memory layout match this?

Could always be that the Scarlett video wasn't indicative of the final product.
And in regards to the parts:
16 GDDR6 chips? 18Gbps
Not sure about the second point.
The 3rd point is 3 chips of 2GB DDR4 memory, 2 used for the SSD caching, but this would contradict one of the points in the patent that mentioned using SRAM for the SSD instead of DRAM; of course, the patents might not end up being used.
From what I am finding regarding the 4th point, each of these is 500GB of SSD storage, which means 2TB of SSD total.
So 16 GDDR6 chips means they could have 2GB in some and 1GB in others, right? So we could have anywhere between 16-32GB of RAM? Can we calculate the bandwidth from this?

So in addition to 2TB of SSD, they are using almost 6GB of DDR4 as a middle layer between the SSD and VRAM? Why? This thing seems way too overengineered and expensive.

Like, why spend all this money on SSD and RAM when you are going to get your ass kicked when MS goes with a powerful GPU and a massive die?
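On the bandwidth question above: a rough calculation is possible if we assume the rumored figures (18Gbps GDDR6 with 16 chips in clamshell, where two chips share each 32-bit channel, giving a 256-bit bus):

```python
def gddr6_bandwidth_gbs(pin_speed_gbps, bus_width_bits):
    """Peak bandwidth in GB/s: per-pin data rate times bus width, in bytes."""
    return pin_speed_gbps * bus_width_bits / 8

# 16 chips in clamshell -> two chips per 32-bit channel -> 256-bit bus (assumption).
print(gddr6_bandwidth_gbs(18, 256))  # 576.0 GB/s
# The same 16 chips without clamshell (16 x 32-bit) would be a 512-bit bus:
print(gddr6_bandwidth_gbs(18, 512))  # 1152.0 GB/s
```

Note that in clamshell mode the chip count doubles capacity, not bandwidth, so the mixed 1GB/2GB idea wouldn't change these numbers either way.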



So is Flute the next iteration of Gonzalo, or is it Xbox as Tom's Hardware seems to think?
I initially thought Flute appearing now meant that this is MS's revision after learning Sony was able to push the clocks to 1.8GHz. But now I'm not sure. I thought the last Gonzalo leak was the qualifying sample and that was going to be the last one we saw before the final silicon, so why would we see another one for Sony?
 

Metalane

Member
Jun 30, 2019
777
Massachusetts, USA
If it's not specified, chances are it's read. Both can be measured in GB/s and they're almost invariably different.



For some operations, yes, but developers are generally going to design away from random-access reads because they perform so poorly. Especially on physical media. So the situations where you'll see that much speed improvement are rare because everyone has been avoiding them already.



We have no idea what Microsoft's solution really looks like, so it's hard to speculate, but most people focus on sequential read rates because that's what yields the biggest raw numbers (people are nothing if not predictable.) So that's where the comparison is most likely to be drawn.
Thanks! Great write up. Wasn't there a recent Sony patent that described a different way of handling the data? I'll see if I can find it.

Edit: "chris 1515" was talking about it a few pages back.

The SSD described in the Sony patent uses software and hardware technology to keep the SSD's sequential and random access speeds the same. It goes even faster than the speed you describe. There is only one case in the patent that kills the system: 4KiB files. If a big quantity of 4KiB files needs to be streamed, it is better to organize them into one big file so it can be read in parallel. Random access performance is very important for an SSD, since wear-leveling tries to limit writes to the same physical address to improve SSD life...



The patent describes how current SSD speed is bottlenecked by a file system conceived for HDDs, by the CPU, and by the controller.



Sony is not the only one changing the technology to use SSDs differently; there is a datacenter standard for improving the performance and durability of SSDs and tailoring the SSD to the application type. All the controller and internal SSD management is done by the CPU using main RAM.

Introduction - Open-Channel Solid State Drives

——————————————————————————————————————————————————————————————————————————

The tests I did are synthetic, meaning there is no real-world application/game behind them. That is important, because I am not claiming those results will materialize in the same magnitude on consoles. But what you see is the significant advantage SSDs/NVMes can bring to the table.

That said, I assume the "40x faster" comment by MS is based on performance tests with a couple of games/prototypes and not on synthetic tests like mine. In that sense, they are more realistic than any synthetic test can be.
I see, thanks for clearing it up.
 
Last edited:
Status
Not open for further replies.