
bcatwilly

Member
Oct 27, 2017
2,483
Come on dawg. That's pretty tenuous at best. There are plenty of ways to get around what the camera sees at that moment. You only need to load what the camera can see in a given time frame. Even if you set it so that you can turn the camera 360 in less than 2 seconds (yeah, nah), you won't be traversing the space fast enough that you can't do the full flush in 2.8 seconds instead of 1.5 seconds.

Also, if you need 13.5GB of assets to draw what the camera can see during a 360 degree camera spin, you're doing something seriously fucking wrong and should not be making games! lolz

LOL, this.
 

AntiMacro

Member
Oct 27, 2017
3,136
Alberta
Say a character is stuck in a nightmare and, literally, every time they turn around they are in a new environment. To make the same scenario work with half-speed IO, you'd have to use more of the RAM to preload the next scene, which would limit your flexibility with what the next scene could be and the fidelity of the assets in your current scene.
Ever played Layers of Fear? It does this...on current gen.
 

Trup1aya

Literally a train safety expert
Member
Oct 25, 2017
21,330
Or add a 1.5 second transition animation before or after the turn?

That's a compromise, especially if the brevity of the transitions is core to the encounter's design.

This type of analysis is pretty much laughable right now for a couple of reasons. First and foremost, it isn't even really logical that any game is going to need to be completely refilling the entire video memory at that rate while barely moving within the game world, and secondly, Xbox has talked very plainly about being able to use the SSD as "virtual RAM" and having access to 100GB instantly for assets and such, which should more than address any reasonable gaming scenario here.

I think this type of response is laughable because, obviously, it doesn't have to be refilling the entire RAM to make a difference, just much more of the RAM than other systems are capable of.

Come on dawg. That's pretty tenuous at best. There are plenty of ways to get around what the camera sees at that moment. You only need to load what the camera can see in a given time frame. Even if you set it so that you can turn the camera 360 in less than 2 seconds (yeah, nah), you won't be traversing the space fast enough that you can't do the full flush in 2.8 seconds instead of 1.5 seconds.

Also, if you need 13.5GB of assets just to draw what the camera can see during a 360 degree camera spin, you're doing something seriously fucking wrong and should not be making games! lolz

You don't just need to load what the camera sees; you have to load what the camera might see in the near future. It's not about needing 13.5 gigs of assets to draw a scene; it's about having immediate access to a bunch of high quality assets.


Ever played Layers of Fear? It does this...on current gen.

No, I've never played it, but I assume its design is subject to current gen limitations that a PS5 game wouldn't be. I think these comments are funny because you have to ignore the scale and fidelity enabled by better hardware, as well as the disparity in design goals, to believe the comparison makes any sense. It's like when people say "having flying mounts in Horizon: Zero Dawn is nothing... we could fly in GTA: San Andreas."
 

Axel Stone

Member
Jan 10, 2020
2,771
That's a compromise, especially if the brevity of the transitions is core to the encounter's design.

It is, but it's a pretty mild compromise, and it's not like Sony's SSD solution is suddenly going to free devs from compromising in any way, shape, or form. A game on PS5 will have to compromise on ray tracing, resolution, or frame rate in comparison to the XSX version; with these sorts of endeavours you always have to compromise.

Plus, the person you were replying to was saying that there are no scenarios that would be impossible on PS5 vs XSX, not that there wouldn't be compromises going from one to the other.
 

Trup1aya

Literally a train safety expert
Member
Oct 25, 2017
21,330
It is, but it's a pretty mild compromise, and it's not like Sony's SSD solution is suddenly going to free devs from compromising in any way, shape, or form. A game on PS5 will have to compromise on ray tracing, resolution, or frame rate in comparison to the XSX version; with these sorts of endeavours you always have to compromise.

Plus, the person you were replying to was saying that there are no scenarios that would be impossible on PS5 vs XSX, not that there wouldn't be compromises going from one to the other.

Game development is inherently about managing compromise. There's no way around it. A game intricately designed around Sony's IO speed would likely deter an Xbox developer from attempting the same approach to design if the compromises hamper the experience.

I'm sure 99% of games would be able to mask compromises in a satisfactory way (with transitions, less detailed assets, etc.), but I wouldn't bet that all could.
 
Last edited:

Gdourado

Member
Oct 1, 2018
139
Lisbon, Portugal
With MS claiming 4 generations of games on their new Series X console, do you believe new 360 and OG Xbox games will be added to the backwards compatibility program?
When were the last games added?
Has MS stated anything about this?
I am wondering because I just remembered a couple of 360 games I would like to play on my One X that are not on the program.
 

ArchedThunder

Uncle Beerus
Member
Oct 25, 2017
19,026
With MS claiming 4 generations of games on their new Series X console, do you believe new 360 and OG Xbox games will be added to the backwards compatibility program?
When were the last games added?
Has MS stated anything about this?
I am wondering because I just remembered a couple of 360 games I would like to play on my One X that are not on the program.
They will absolutely add more games to BC.
 

Deleted member 45460

User requested account closure
Banned
Jun 27, 2018
1,492
SSDs are awesome, powerful GPUs are great, but the far better CPU in the next gen systems is going to be such an incredible upgrade. I 100% believe that Phil will prefer (he doesn't mandate, as far as I know) 60 fps in all 1st party titles from now on as a baseline and I LOVE IT. Hell, they figured out how to do it in Halo 5 and Gears 5 with the awful CPU in the One/One X. I hope Sony does as well, but they have shown a real knack for stunning graphics at a more cinematic 30fps and might be happy to continue with that.
 

litebrite

Banned
Oct 27, 2017
21,832
With MS claiming 4 generations of games on their new Series X console, do you believe new 360 and OG Xbox games will be added to the backwards compatibility program?
When were the last games added?
Has MS stated anything about this?
I am wondering because I just remembered a couple of 360 games I would like to play on my One X that are not on the program.
The last games were added in June of last year so the BC team could focus on making everything compatible on Xbox One also compatible on Xbox Series X. They've added machine learning HDR to non-HDR games on XSX, and they've also evolved the Heutchy Method (used to increase resolution on BC OG Xbox and select Xbox 360 games) to increase resolution for select XB1 games on XSX. Gears of War: Ultimate Edition was used as an example, running at 1080p on XB1 and at native 4K on XSX with no input or patches from the devs.

We don't know if MS will continue adding OG Xbox and Xbox 360 games to the BC list.
 
Mar 22, 2020
87
No, I've never played it, but I assume its design is subject to current gen limitations that a PS5 game wouldn't be.
Oh yeah. Because it might not have anticipated hardware in consoles coming out someday in 2020.
I think these comments are funny because you have to ignore the scale and fidelity enabled by better hardware, as well as the disparity in design goals, to believe the comparison makes any sense.
What are you talking about? Nobody ignores the fact that Sony expects more of their developers given the beefier SSD and the expanded conference.
There is however a point to be made on "swapping as much of the VRAM as possible, all the time" to drive up levels of detail, if the current gen already proves unable to keep up with fidelity.
In response to that, the point, I gather, is "but the reason it cannot keep up is the need to store and reuse assets, and make do with the remaining memory". However, part of the problem is that actually rendering the scene from assets in memory requires read/write bandwidth and keeping both the assets and the results in memory.
Compute and reuse with caching is cheap on power; big memory transactions are not. It's hard to see how you avoid getting caught out by the very thing you're trying to avoid.

Actually, a better (objectively and subjectively) gaming system will solve for a given performance or a given fidelity. A console will solve for a given power budget, thermal envelope, and price. Therefore, going big on I/O makes a lot of sense when you mass-produce a device, because the goal is to consume as little power as possible on everything but compute.

It achieves the same thing, but it uses a different whip.
 
Last edited:

Deleted member 11276

Account closed at user request
Banned
Oct 27, 2017
3,223
If anyone has one of these new Ryzen 7 4900HS laptops, wouldn't it be possible to cap the frequency at 3.6 GHz and then run Cinebench R20? That should give us a pretty close indication of how powerful the CPU in the XSX is, no?
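For anyone who wants to try that, one rough way to do it on Windows is to cap the maximum processor frequency through powercfg before launching the benchmark. A minimal sketch, assuming a Windows laptop with admin rights; the Cinebench install path is a placeholder, and a fixed 3.6 GHz cap only roughly approximates the XSX's locked clocks, since boost and thermal behaviour on a laptop still differ:

```python
import subprocess

# Cap the maximum processor frequency at 3600 MHz on the active AC power plan.
# PROCFREQMAX is Windows' built-in alias for "Maximum processor frequency".
subprocess.run(
    ["powercfg", "/setacvalueindex", "SCHEME_CURRENT",
     "SUB_PROCESSOR", "PROCFREQMAX", "3600"],
    check=True,
)
# Re-apply the current scheme so the new cap takes effect.
subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

# Launch Cinebench R20 (path is an assumption, adjust to your install).
subprocess.run([r"C:\Program Files\Cinebench R20\Cinebench.exe"], check=True)
```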
 

Trup1aya

Literally a train safety expert
Member
Oct 25, 2017
21,330
Oh yeah. Because it might not have anticipated hardware in consoles coming out someday in 2020.

So you agree that it's totally disingenuous to compare a concept limited by last gen hardware to a novel take on said concept on next gen hardware? Or no?

What are you talking about? Nobody ignores the fact that Sony expects more of their developers given the beefier SSD and the expanded conference.
There is however a point to be made on:
  • "swapping as much of the VRAM as possible, all the time" to drive up levels of detail, if the current gen already proves unable to keep up with fidelity.
  • I assume the point is "but the reason it cannot keep up is the need to store and reuse assets, and make do with the remaining memory".
  • However, part of the problem is that actually rendering the scene from assets in memory requires read/write bandwidth and keeping both the assets and the results in memory.
  • Compute and reuse with caching is cheap on power; big memory transactions are not. It's hard to see how you avoid getting caught out by the very thing you're trying to avoid.
Actually, a better (objectively and subjectively) gaming system will solve for a given performance or a given fidelity. A console will solve for a given power budget, thermal envelope, and price. Therefore, going big on I/O makes a lot of sense when you mass-produce a device, because the goal is to consume as little power as possible on everything but compute.

It achieves the same thing, but it uses a different whip.

What point are you countering?
 
Mar 22, 2020
87
I think this type of response is laughable because, obviously, it doesn't have to be refilling the entire RAM to make a difference, just much more of the RAM than other systems are capable of.

You don't just need to load what the camera sees; you have to load what the camera might see in the near future. It's not about needing 13.5 gigs of assets to draw a scene; it's about having immediate access to a bunch of high quality assets.

[...] I think these comments are funny because you have to ignore the scale and fidelity enabled by better hardware, as well as the disparity in design goals, to believe the comparison makes any sense. It's like when people say "having flying mounts in Horizon: Zero Dawn is nothing... we could fly in GTA: San Andreas."
These are related comments. The last part in bold is what I'm directly responding to. It confuses me a little, but I gather you expect graphical fidelity to be driven up by faster I/O and high-density storage.
You may also be making a point that higher fidelity may be achieved on systems with lower raw compute power but higher available I/O throughput.
If you make the point that there is a bigger statement from Sony than Microsoft on expectations from the new high-density storage, I do agree.
Edit: I modified my first post so it makes more sense
 

Trup1aya

Literally a train safety expert
Member
Oct 25, 2017
21,330
These are related comments. The last part in bold is what I'm directly responding to. It confuses me a little, but I gather you expect graphical fidelity to be driven up by faster I/O and high-density storage.
You may also be making a point that higher fidelity may be achieved on systems with lower raw compute power but higher available I/O throughput.
If you make the point that there is a bigger statement from Sony than Microsoft on expectations from the new high-density storage, I do agree.
Edit: I modified my first post so it makes more sense

The last point you bolded was in response to the notion that a last gen implementation of a concept inherently undermines the increase in scope and scale that becomes possible with dramatic increases in hardware capabilities.

In general, I expect first party developers to make a case that their hardware enables the creation of experiences that couldn't be had elsewhere without serious compromise. Sony doesn't have a monopoly on that, but I think IO is where they'll be making their demonstrations.

"fidelity" is a broad term. I expect XSX to be generally higher fidelity in terms of resolutions and geometry. Generally, i think XSX will have the better asset quality too. But I can envision scenarios where Sony uses their IO advantage to create scenarios where asset quality/
utilization trumps what is feasible on XSX.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Pascal has dot4 instructions for INT8 but no dot8 instructions for INT4 support:
https://devblogs.nvidia.com/mixed-precision-programming-cuda-8/


Well, historically it's a relatively new development that GPUs from AMD, Intel and Nvidia utilize lower precision data formats for compute.
AI is growing and so is the application field.
Software infrastructure got laid down with WinML and DirectML, so there is motivation for HW to provide more throughput there.

You are right that it doesn't mean every RDNA2 GPU will now come with the instructions, but the implementation costs are low and AMD already provides support for it on a midrange GPU series, so I would rather expect most or all upcoming GPUs from AMD to have that capability.
You are right, it is the future, but we have no idea if RDNA 2 has it as a baseline or if PS5 has it.
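As a side note on what those dot4/dot8 instructions actually compute: a packed-INT8 dot product (DP4A-style) multiplies four signed 8-bit lanes from each 32-bit operand and accumulates the sum into a 32-bit integer, which is why it's attractive for inference throughput. A purely illustrative sketch of the equivalent arithmetic in Python (this is not GPU ISA; the function name is made up, and a real GPU does this in a single ALU instruction):

```python
import struct

def dot4_int8(a_packed: int, b_packed: int, acc: int = 0) -> int:
    """Illustrative dp4a: dot product of four signed 8-bit lanes packed
    into two 32-bit words, accumulated into a 32-bit integer."""
    # Unpack each 32-bit word into four signed 8-bit lanes.
    a_lanes = struct.unpack("4b", a_packed.to_bytes(4, "little"))
    b_lanes = struct.unpack("4b", b_packed.to_bytes(4, "little"))
    return acc + sum(x * y for x, y in zip(a_lanes, b_lanes))

# Example: pack (1, 2, 3, 4) and (5, 6, 7, 8) into 32-bit words; dot product = 70.
a = int.from_bytes(struct.pack("4b", 1, 2, 3, 4), "little")
b = int.from_bytes(struct.pack("4b", 5, 6, 7, 8), "little")
print(dot4_int8(a, b))  # 70
```

An INT4 dot8 instruction is the same idea with eight 4-bit lanes per 32-bit word.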

With MS claiming 4 generations of games on their new Series X console, do you believe new 360 and OG Xbox games will be added to the backwards compatibility program?
When were the last games added?
Has MS stated anything about this?
I am wondering because I just remembered a couple of 360 games I would like to play on my One X that are not on the program.
I'm sure they will. They have a full BC team with nothing to do once the XSX comes out, considering MS claims all X1 games run in BC mode. I mean, if all X1 games run in BC mode from day one, all the BC team can really do is add more BC games or upgrade existing BC games, and both sound like wonderful things.
 

Scently

Member
Oct 27, 2017
1,464
You are right, it is the future, but we have no idea if RDNA 2 has it as a baseline or if PS5 has it.


I'm sure they will. They have a full BC team with nothing to do once the XSX comes out, considering MS claims all X1 games run in BC mode. I mean, if all X1 games run in BC mode from day one, all the BC team can really do is add more BC games or upgrade existing BC games, and both sound like wonderful things.
Indeed. Quite literally the only reason they stopped was to focus on XSX compatibility with the previous generation. Once that is done, I expect we will see the 360 and OG Xbox BC rollout resume, with the occasional X-enhanced 360 and X1 game. In the DF interview, they talked about exploring different methods of enhancing BC games. I think BC enhancement and rollout will be a continuous and ongoing thing for MS.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Indeed. Quite literally the only reason they stopped was to focus on XSX compatibility with the previous generation. Once that is done, I expect we will see the 360 and OG Xbox BC rollout resume, with the occasional X-enhanced 360 and X1 game. In the DF interview, they talked about exploring different methods of enhancing BC games. I think BC enhancement and rollout will be a continuous and ongoing thing for MS.
I won't be surprised to see ML upscaling, à la DLSS, for every BC game just like they are doing with HDR right now.
 

Scently

Member
Oct 27, 2017
1,464
I won't be surprised to see ML upscaling, à la DLSS, for every BC game just like they are doing with HDR right now.
Yeah, that would be a good option for games they can't apply the Heutchy method to. Although DLSS usually involves training and then downloading the trained data for each individual game, I wouldn't put it past them to find a general training method that applies to all games, just like the HDR stuff. They only trained it on Gears 5 data, and the result is applicable to all games in theory.

I am also curious about their hint at doubling framerate. That's something I've always thought should be possible. The only issue is that it might involve touching the game code, unless they figure out a way to intercept it at the API level and double it without touching the game code. Exciting stuff.
 

DrKeo

Banned
Mar 3, 2019
2,600
Israel
Yeah, that would be a good option for games they can't apply the Heutchy method to. Although DLSS usually involves training and then downloading the trained data for each individual game, I wouldn't put it past them to find a general training method that applies to all games, just like the HDR stuff. They only trained it on Gears 5 data, and the result is applicable to all games in theory.

I am also curious about their hint at doubling framerate. That's something I've always thought should be possible. The only issue is that it might involve touching the game code, unless they figure out a way to intercept it at the API level and double it without touching the game code. Exciting stuff.
That was true for DLSS 1, but DLSS 2.0 actually doesn't need to be trained for each game individually anymore. Just like you've said, Microsoft's HDR implementation doesn't either: it was trained on Gears 5 and now applies to any game ever made for an Xbox, even 15-year-old games. So it's a possibility.
 

EvilBoris

Prophet of Truth - HDTVtest
Verified
Oct 29, 2017
16,681
Just for reference, it wasn't trained on Gears 5, it was trained on a number of different titles. Gears 5 used the results of that training.

The Series X is doing something more sophisticated in some other custom hardware; this same hardware may also be used to provide some accessibility options for visual issues, such as colour deficiency.
 

DukeBlueBall

Banned
Oct 27, 2017
9,059
Seattle, WA
So I went dumpster diving into the RDNA whitepaper to see mentions of packed integer support for INT8 and INT4.
There were some quotes on pages 12, 13 and 14:

RNDA-Machine-Learning-Ops.jpg


So it appears that, if things are similar in RDNA2, there might not be a point in running INT8 to save space in the ALUs. However, it's true Microsoft mentions "we added special hardware support for this specific scenario", but that doesn't necessarily mean it's specific to their iGPU; it could simply mean they weighed in on the change. Keep in mind there isn't a point in running integer quantization for every stage of inference, but there is a point in running at least FP16.

If anyone is interested in which GDDR6 ICs will be used on the Xbox Series X, I looked up Saturday's Digital Foundry teardown of the console and found that Microsoft uses Samsung ICs, at least on this sample.
  • XSX uses 10 GDDR6 ICs, 6 2GB modules and 4 1GB modules, all rated for 14 Gbps.
  • The "top" 4 ICs are 2GB/16Gb K4ZAF325BM-HC14 modules from Samsung,
  • "left" and "right" ICs are split into two categories: middle ones are 2GB, "bottom" and "top" ones are K4Z80325BC-HC14 1GB modules from Samsung.
This should confirm the memory layout:
SOC.png

This means the 4 "top" modules closer to the 2 4-core CPU CCX are 2GB modules, and the ones closer to the GPU are split 1GB/2GB modules but the most bandwidth seems affected to the iGPU.

If people wondered what might make a nice addition to a Pro refresh in a few years' time, they might switch to faster GDDR6 memory and add new silicon revisions of Zen 2 or RDNA 2. I don't think Samsung provides a spec sheet for these, but Micron does provide one with further details on clocking.

Pretty sure the GDDR6 modules are from SK Hynix.
2b0vPjQ.png


Also, about the 384-bit bus: the reason MS chose 320-bit over 384-bit is that they were running into signal issues with a 384-bit bus and the ultra-fast GDDR6.
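As a rough sanity check on what those bus widths mean for bandwidth (assuming a 32-bit interface per GDDR6 IC and the 14 Gbps per-pin rating quoted above): 10 chips give a 320-bit bus and about 560 GB/s, accesses confined to the six 2GB chips see roughly 336 GB/s across 192 bits, and a 384-bit bus would have pushed the peak to 672 GB/s. A quick sketch of that arithmetic:

```python
# Peak bandwidth math for the memory setup described above (assumed figures:
# 32-bit interface per GDDR6 IC, 14 Gbps per data pin).
GBPS_PER_PIN = 14      # gigabits per second per pin
BITS_PER_CHIP = 32     # width of a single GDDR6 IC's interface

def peak_bandwidth_gbs(num_chips: int) -> float:
    """Peak theoretical bandwidth in GB/s across num_chips GDDR6 ICs."""
    return num_chips * BITS_PER_CHIP * GBPS_PER_PIN / 8  # bits -> bytes

print(peak_bandwidth_gbs(10))  # 320-bit bus -> 560.0 GB/s
print(peak_bandwidth_gbs(6))   # 192-bit slice (the 2GB chips) -> 336.0 GB/s
print(peak_bandwidth_gbs(12))  # hypothetical 384-bit bus -> 672.0 GB/s
```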
 

rokkerkory

Banned
Jun 14, 2018
14,128
Just for reference, it wasn't trained on Gears 5, it was trained on a number of different titles. Gears 5 used the results of that training.

The Series X is doing something more sophisticated in some other custom hardware; this same hardware may also be used to provide some accessibility options for visual issues, such as colour deficiency.

so very cool!
 

Scently

Member
Oct 27, 2017
1,464
Just for reference, it wasn't trained on Gears 5, it was trained on a number of different titles. Gears 5 used the results of that training.

The Series X is doing something more sophisticated in some other custom hardware; this same hardware may also be used to provide some accessibility options for visual issues, such as colour deficiency.
That would be DirectML running under INT8 or INT4. That's why they have hardware support for those "weights", as they put it in the DF interview.
 

Mollymauk

Member
Oct 27, 2017
4,316
Like I said, it's optimally 1.5 seconds to fill the RAM vs. 2.8 seconds to fill the game RAM in the worst case scenario of a full flush (highly unlikely). More likely, only part of the RAM would need to be refilled, so even for XSX it would be more like 2 seconds or less. THE DIFFERENCE ISN'T ENOUGH IN ANY REALISTIC GAMING SCENARIOS. It won't make a game scenario impossible in any practical sense on XSX while only possible on PS5. It would have to be some kind of wacky scenario that you go out of your way to put yourself into as a developer.

Also, BTW, you are not gonna thrash the SSD that continually anyway, constantly reading 9GB/s from it and heating it up. If that's what Sony's devs plan to do, then that theoretical "approved 3rd party M.2 drive" list just got even shorter.
1.3 seconds = a revolution in game design
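For reference, the 1.5 s / 2.8 s / 1.3 s figures being traded here are just the usable game RAM divided by each console's commonly quoted compressed SSD throughput. A back-of-the-envelope sketch, assuming 13.5 GB of game-accessible RAM and roughly 9 GB/s (PS5) vs 4.8 GB/s (XSX) compressed throughput, as the posts above do:

```python
# Back-of-the-envelope full-RAM-flush times using the figures quoted in-thread.
USABLE_RAM_GB = 13.5        # assumed game-accessible RAM
PS5_THROUGHPUT_GBS = 9.0    # ~8-9 GB/s typical compressed figure
XSX_THROUGHPUT_GBS = 4.8    # quoted compressed figure

ps5_fill = USABLE_RAM_GB / PS5_THROUGHPUT_GBS   # ~1.5 s
xsx_fill = USABLE_RAM_GB / XSX_THROUGHPUT_GBS   # ~2.8 s
print(f"PS5: {ps5_fill:.1f} s, XSX: {xsx_fill:.1f} s, "
      f"difference: {xsx_fill - ps5_fill:.1f} s")  # difference ~1.3 s
```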
 
Mar 22, 2020
87
Pretty sure the GDDR6 modules are from SK Hynix.
2b0vPjQ.png
No, they are not, but you're right that this is a Hynix part number.

You should have watched the video and not checked only the image I attached below. This is a render, and it isn't showcasing anything but placeholder assets for the ICs; the memory shown isn't even GDDR6. If you actually look up Saturday's Digital Foundry teardown, you will see they used Samsung ICs.
Microsoft uses Samsung ICs, at least on this sample.
  • XSX uses 10 GDDR6 ICs, 6 2GB modules and 4 1GB modules, all rated for 14Gbps.
  • The "top" 4 ICs are 2GB/16Gb K4ZAF325BM-HC14 modules from Samsung,
  • "left" and "right" ICs are split into two categories: middle ones are 2GB, "bottom" and "top" ones are K4Z80325BC-HC14 1GB modules from Samsung.
Now, it doesn't mean they will actually end up using that exact IC in every single console, because this is probably an engineering sample and not reflective of larger production runs. Only Samsung currently produces 2GB and 1GB ICs at 14 Gbps speed ratings. Micron and SK Hynix make GDDR6, but only 1GB modules at 14 Gbps, and they currently do not list 2GB ICs at that speed rating. That's why those end up on current GPUs.

SK Hynix no longer provides an LPDDR3 product part decoder, and I suspected the chip you used as a reference is LP DRAM because "H9" is that product family.
From an LPDDR3 part decoder I found elsewhere, you can see this IC is actually LPDDR3:
  • H9 - Hynix LP DRAM MCP (multi-chip package)
  • CC - DDR3 Only
  • NN -
  • N -
  • BL - 16Gb quad die package
  • T - 1.2V, x32
  • M - 1st gen die
  • LA - FBGA178
  • R - ROHS compliant
  • N -
  • T - DDR3-1600-CL12
  • M - operating temperature -30°C - 85°C (so also not fit for this use case, it should go up to 105°C)
H9 - CC - NN - N - BL - T - M - LA - R - N - T - M
 

DukeBlueBall

Banned
Oct 27, 2017
9,059
Seattle, WA
No, they are not, but you're right that this is a Hynix part number.

You should have watched the video and not checked only the image I attached below. This is a render, and it isn't showcasing anything but placeholder assets for the ICs; the memory shown isn't even GDDR6. If you actually look up Saturday's Digital Foundry teardown, you will see they used Samsung ICs.

Now, it doesn't mean they will actually end up using that exact IC in every single console, because this is probably an engineering sample and not reflective of larger production runs. Only Samsung currently produces 2GB and 1GB ICs at 14 Gbps speed ratings. Micron and SK Hynix make GDDR6, but only 1GB modules at 14 Gbps, and they currently do not list 2GB ICs at that speed rating. That's why those end up on current GPUs.

SK Hynix no longer provides an LPDDR3 product part decoder, and I suspected the chip you used as a reference is LP DRAM because "H9" is that product family.
From an LPDDR3 part decoder I found elsewhere, you can see this IC is actually LPDDR3:
  • H9 - Hynix LP DRAM MCP (multi-chip package)
  • CC - DDR3 Only
  • NN -
  • N -
  • BL - 16Gb quad die package
  • T - 1.2V, x32
  • M - 1st gen die
  • LA - FBGA178
  • R - ROHS compliant
  • N -
  • T - DDR3-1600-CL12
  • M - operating temperature -30°C - 85°C (so also not fit for this use case, it should go up to 105°C)
H9 - CC - NN - N - BL - T - M - LA - R - N - T - M

Now that's attention to detail x 10!
We should be friends. :p
 

Black_Stride

Avenger
Oct 28, 2017
7,388

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
Before even watching this, I can already tell it's some clickbait shit... not gonna bother wasting my time.

If I needed a review of the case I'll watch a review of the case.
Yeah, just saw it on my YouTube feed without watching it. Crazy, he uses an Intel CPU .... Noooooooo

Edit:
You were right. Uses an Intel CPU and an Nvidia card ....