A 2:1 ratio just isn't how lossless compression works. Both numbers will be inflated under some scenarios; ultimately, how close either lands will depend more on the game's dataset than on the algorithms.
Makes a lot more sense when you look at it this way. Thank you.

Everyone, please listen to this. We already know that BCPack is better, due to the already released numbers. The 4.8GB/s is WITH BCPack already, and it's a 100% increase over the 2.4GB/s raw speed. PS5 only gets a 64% increase from 5.5GB/s to 9GB/s. So yes, it's better, but we already know how much better and it's not nearly enough to make up for the massive advantage PS5 has here.
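For reference, the percentage figures quoted above check out. A quick sketch of the arithmetic, using only the numbers cited in this thread:

```python
def pct_increase(raw_gbps, effective_gbps):
    """Percent gain from hardware decompression over raw SSD speed."""
    return (effective_gbps / raw_gbps - 1) * 100

# Figures quoted in the thread.
xsx = pct_increase(2.4, 4.8)   # Series X: raw 2.4 GB/s -> 4.8 GB/s with BCPack
ps5 = pct_increase(5.5, 9.0)   # PS5: raw 5.5 GB/s -> 9 GB/s with Kraken

print(f"XSX: +{xsx:.0f}%")   # XSX: +100%
print(f"PS5: +{ps5:.0f}%")   # PS5: +64%
```

Note the percentages only describe each console's gain over its own raw speed; the absolute throughputs (4.8 vs 9 GB/s) are what games actually see.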
This article is basically a clickbait article with no actual information in it.
This too. I would imagine their hardware engineers thought long and hard about what SSD speeds they needed when they built this machine. Sony and MS just decided to spend their budgets differently, and that's ok. PS5 will stream assets faster and hide pop-in, Xbox will render with greater fidelity.

I agree with this.
Now the real question, however: does it actually matter? Will the Xbox be bottlenecked in feeding assets to VRAM in 99% of game scenarios? The answer is probably no. At best, maybe Sony first party can come up with some wild gameplay elements that need to rewrite VRAM textures at an incredible pace. I'd venture that they'd have to go out of their way to highlight this strength, however. Outside of that, faster loading would be the main benefit, which is pretty cool.
What I recall from the XB360 days (I think it was back then) was splitting the LUT and colors into separate blocks that compress better on average (and potentially using a different compressor for each to maximize efficiency).
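That stream-splitting idea can be sketched. This is a toy illustration under stated assumptions — synthetic BC1-style blocks (8 bytes each: two RGB565 endpoint colors plus a 4-byte selector/index table) and zlib as a stand-in backend, not anything from an actual console toolchain:

```python
import random
import zlib

random.seed(0)

# Synthetic BC1-like blocks: 4 bytes of endpoint colors (two RGB565
# values) + 4 bytes of 2-bit selector indices (the "LUT"). Real texture
# data would show the effect more strongly; this is just a sketch.
def make_block():
    c0 = random.randrange(2048)                     # endpoint color 0
    c1 = c0 ^ random.randrange(16)                  # nearby endpoint color 1
    sel = random.choice([0x00, 0x55, 0xAA, 0xFF])   # repetitive selectors
    return c0.to_bytes(2, "little") + c1.to_bytes(2, "little") + bytes([sel] * 4)

blocks = [make_block() for _ in range(4096)]

interleaved = b"".join(blocks)                 # blocks as stored on disk
endpoints   = b"".join(b[:4] for b in blocks)  # color stream
selectors   = b"".join(b[4:] for b in blocks)  # index/LUT stream

whole = len(zlib.compress(interleaved, 9))
split = len(zlib.compress(endpoints, 9)) + len(zlib.compress(selectors, 9))
print(whole, split)  # split is smaller on data like this
```

Deinterleaving helps because each stream has more uniform statistics, so the entropy coder and match finder do better than on the mixed layout — the same reason one might then pick a different compressor per stream.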
It might very well be, but I consider the Geldreich guy to be unreliable. He's gone off a few times about being the target of psychological torture at Valve, lol. That doesn't have anything to do with this, but I raise an eyebrow when he does any shit stirring.
Is that Ging Sama person in the tweet the same person that posts here? The person is insufferable
Yes.
Can't believe we are comparing texture compression algorithms now.
What's next? Are we gonna compare the compilers used?
Ah yes, that brings back memories of the early PS3 era and compiler auto vectorization. I'd forgotten about that, thank you. :)
Because they use the best case (which is why they didn't put the typical ratio in the spec sheet): they take BCPack's best-case 2:1 compression ratio and apply it as if textures were the only type of asset.
Ignoring all this platform warring bullshit on Era for a moment, as a Dev, are you excited for next gen? Would love to hear your thoughts.

People are trying to make news out of everything. :)
Can't blame them, though, not a lot of technical info is out there, but hopefully that will start being released the closer we get to these platforms launching.
The 4.8GB/s is WITH BCPack already, and it's a 100% increase over the 2.4GB/s raw speed. PS5 only gets a 64% increase from 5.5GB/s to 9GB/s. So yes, it's better, but we already know how much better and it's not nearly enough to make up for the massive advantage PS5 has here.
Massive advantage? How will this "massive advantage" translate into third-party multiplatform games?
How much more does Sony pay for the SSD itself, and for cooling it, just to gain this advantage?
I would say Microsoft's approach is way smarter: in real-world scenarios for third-party multiplatform games we won't see much of a difference except for loading screens, and MS is paying far less for it.
What is it at the moment with all these low-post-count accounts shitting on the PS5 and promoting the Series X in every next-gen thread with nearly every post?
Is the same happening with people shitting on the Series X and promoting the PS5 from low-post-count accounts?
You can spot them straight away: no avatar and clearly warring.
We don't know how third parties will use the SSD until we see games or third-party devs openly speak about it. I can imagine Rockstar might use it.
The Xbox One X has a large power advantage over the Pro, and you barely see third parties doing much with it except higher resolution, sometimes at the cost of performance.
I don't know, especially if everything is countered with "but company X can do that, too!"

Man, Kraken isn't even exclusive to Sony, just like BCPack won't be exclusive to Microsoft.
Why are people making a big deal over this :S
I'm going to post in a thread where I have absolutely no idea what any of it means and declare console X superior to console Y.
Well, bish had to bring out the lawnmower back at the old place at the start of this gen because there were so many marketing accounts active at the time, so maybe we're seeing a repeat of that.
Or maybe people just dusted off their alts / created new alts for platform warring.
Yes.
That's not how this works. The 4.8 GB/s number for XSX and 9 GB/s for PS5 are with BCPack and Kraken in use. As many posters have pointed out, the XSX numbers show a proportionally higher boost, going from 2.4 to 4.8, compared to the PS5 SSD bandwidth going from 5.5 to 8/9.

That is a very clickbaity headline and an article that is very much a console wars article.
But just curious, for those who halfway know what they are talking about: in a hypothetical situation where the XSX compressed at 50% efficiency versus the PS5 at 30%, at their reported speeds of ~4.8 and 8/9 GB/s respectively, which console would end up on top? I get there's a lot we don't know, and things we do know that still make it an apples-to-oranges comparison; I'm just curious if the math even checks out when the PS5 is so much faster as a baseline.
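Taking the hypothetical at face value — and assuming "50% efficiency" means the data shrinks to half its size, applied to the raw drive speeds (2.4 and 5.5 GB/s) rather than the already compressed figures — the arithmetic is simple:

```python
def effective_gbps(raw_gbps, savings):
    """Effective delivery rate if compression removes `savings`
    fraction of the bytes (0.5 = data shrinks to half size)."""
    return raw_gbps / (1.0 - savings)

# Raw speeds from the consoles' spec sheets, hypothetical ratios
# from the question above.
xsx = effective_gbps(2.4, 0.50)   # 4.8 GB/s
ps5 = effective_gbps(5.5, 0.30)   # ~7.86 GB/s
print(round(xsx, 2), round(ps5, 2))
```

Even in this scenario skewed toward the XSX, the PS5's higher raw speed keeps it ahead (~7.9 vs 4.8 GB/s); the XSX would need nearly 70% average savings just to match that.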
There is also the possibility of using it as non-volatile, slower RAM. The PS5 SSD can match mid-tier DDR2 RAM without compression, low/mid-tier DDR3 if the compression numbers hold up. That's fast enough to potentially be used as RAM for some workloads, depending on the latency. While nothing concrete has been said about that so far, we do know that the PS5 SSD controller supports an unusually high number of priority levels. That does seem to indicate that they expect it to need to handle some very low-latency operations.

I agree with this.
I wouldn't necessarily assume that feature is a Microsoft exclusive.
Sony hasn't talked about it, but they haven't talked about anything GPU related really. Their entire reveal was SSD, 3D audio and "oh! we have hw raytracing".
Sampler feedback is a feature coming on desktop as well on RDNA2 hardware, so it's very possible Sony has the needed hardware changes in there as well.
On the other hand it's a feature designed by Microsoft in collaboration with AMD and requires hardware changes to the texture unit block, so it might also be something that's not been made available to Sony.
If the latter is the case, it's a feature that, when used properly, has the chance to greatly reduce memory usage and the amount of data that needs to be streamed from disk.
Ignoring all this platform warring bullshit on Era for a moment, as a Dev, are you excited for next gen? Would love to hear your thoughts.
But wouldn't that then be a software solution, when BCPack is done in hardware (as in, dedicated silicon)? Or am I missing something here?
I think the question isn't necessarily compared to Kraken. It seems unlikely that PS5 will only support that; far more plausible is that they'll support a wide range of formats already in use. So the question would be, say, how much more efficient is BCPack than BC7?
There are lots of things being talked about on both sides that can actually end up being standard RDNA 2 features, or MS/Sony adaptations (via drivers etc.) of those features. It's going to take a while to see which ones are properly available on which platform.
Sampler feedback does seem like a potentially big one - effectively reducing bandwidth used will help a lot
Yeah until Sony publicly confirms exactly what went inside of PS5 from the standard set of RDNA2 features it's hard to tell.
Features like sampler feedback require a modified texture unit to work and cannot be emulated via software though (I cannot think of a way to do it, at least).
I'm fairly certain the hardware support for SFS is part of RDNA2. Microsoft's own Github page on the feature mentions that there are already multiple GPUs compatible with the feature.
Thanks a lot for your effort!

Unless I missed something, there's no confirmation about hw decoding for BCPack. And it's probably not necessarily needed. Usually formats like these are created with decompression speed in mind; it's not unusual to decompress on the CPU at nearly RAM speed. The biggest win of dedicated hw wouldn't be the throughput, but the removal of that burden from the CPU to a dedicated hw block. It's a tradeoff between the cost of additional silicon and the CPU decompression cost. This is another thing where sampler feedback would help, as you wouldn't waste time (ideally) decompressing mip levels you don't need.
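The point about CPU decompression speed is easy to sanity-check. A rough sketch — using zlib as a stand-in, since BCPack's internals aren't public, and with synthetic data — that measures single-core decompression throughput:

```python
import os
import time
import zlib

# ~4 MiB of semi-compressible data standing in for game assets.
payload = (bytes(range(256)) * 48 + os.urandom(4096)) * 256
compressed = zlib.compress(payload, 6)

t0 = time.perf_counter()
out = zlib.decompress(compressed)
dt = time.perf_counter() - t0

assert out == payload
print(f"ratio {len(payload) / len(compressed):.1f}:1, "
      f"{len(payload) / dt / 1e9:.2f} GB/s on one core")
```

Depending on the data and codec this lands anywhere from hundreds of MB/s to a few GB/s per core, which is exactly the CPU time a dedicated hardware block would hand back to the game.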
Maybe I've picked up on it wrong, but it seems like the logical conclusion of work that (afaik) started with Lionhead's work on megameshes and on Milo & Kate, which was basically a kind of feedback-based virtual texturing system. They rendered a low-res version of the scene up front and used that to determine which textures and what mip levels to bring into their physical memory pages. You could also do it at full res as part of an early-Z pass, then defer your texture sampling as late as possible in the frame, in the hope that by the time you need the textures, more of what you pinpointed in that early pass will be in memory.
With lower latency and higher bandwidth streaming, it might be possible to do that now without, or with fewer, temporal artifacts. I wonder how much texture data you could shuttle from SSD to memory in, say, 10 ms. I think ideas like that might come back en vogue, or more precise/aggressive versions of them might come to the fore again, if the performance is there to bring in texture data intra-frame, or even within one or two frames. (And funnily enough, this is one area where I think the sky is the limit on useful SSD performance.)
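The 10 ms question is just bandwidth arithmetic, ignoring latency and queue behavior and using the figures quoted in this thread:

```python
def mb_per_slice(gbps, ms):
    """Data movable in a time slice, in MB (1 GB = 1000 MB here)."""
    return gbps * 1000 * (ms / 1000.0)

# Using the raw and compressed figures discussed in the thread.
for label, speed in [("PS5 raw", 5.5), ("PS5 compressed", 9.0),
                     ("XSX raw", 2.4), ("XSX compressed", 4.8)]:
    print(f"{label}: {mb_per_slice(speed, 10):.0f} MB in 10 ms")
```

So tens of megabytes per 10 ms slice in principle — a meaningful chunk of a texture working set — before any latency or scheduling overhead is accounted for.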
Sampler feedback sounds like a way to make this easier and/or more precise. But if you were trying to do the same in software, I think you probably wouldn't try to emulate the texture unit behaviour used therein; you'd skin the cat a different way and approach the problem from another vector, like an early pass over the scene, or a copy-back of the last frame as in occlusion culling. Independent of new hardware support, I think you'd see new experimentation in virtual texturing purely because of the bumps in GPU and SSD performance, although I hope Sony also has new supports like that in hardware too, just because it would make that work easier.
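The software route described — an early pass that records which mip each pixel would sample, used to drive streaming — can be sketched on the CPU. This toy version (my own illustration, not SFS or any shipped engine) computes the required mip level from screen-space UV derivatives, the same quantity a feedback pass would capture:

```python
import math

def required_mip(duv_dx, duv_dy, tex_size):
    """Mip level a sampler would pick, from screen-space UV derivatives
    (the quantity a feedback pass would record per pixel)."""
    # Texel footprint of one pixel step in x and in y.
    fx = math.hypot(duv_dx[0] * tex_size, duv_dx[1] * tex_size)
    fy = math.hypot(duv_dy[0] * tex_size, duv_dy[1] * tex_size)
    rho = max(fx, fy)
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0

# Feedback pass over a toy "frame": collect the set of (texture, mip)
# pairs actually touched, which is what the streamer would then load.
needed = set()
for duv in [((1/4096, 0), (0, 1/4096)),    # close-up surface -> mip 0
            ((8/4096, 0), (0, 8/4096))]:   # distant surface  -> mip 3
    needed.add(("albedo", int(required_mip(*duv, 4096))))

print(sorted(needed))  # [('albedo', 0), ('albedo', 3)]
```

The hardware version just records this per-sample instead of requiring a separate low-res pass, which is what makes it cheaper and more precise.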
I'm fairly certain the hardware support for SFS is part of RDNA2. Microsoft's own Github page on the feature mentions that there are already multiple GPU's compatible with the feature.
Again, I'm just going by the DirectX Github that says multiple GPUs can support the feature (it says to contact GPU vendors to confirm which GPUs support it, but it also gives a command to type in to check if the GPU you're currently working on supports it). It's basically a more advanced version of PRT, and apparently is sometimes referred to as PRT+.

I *think* the original research and idea for it came from Microsoft, and if that's the case it's not unreasonable to think Sony did not have access to it even though the base architecture is still RDNA2. It would be nice for Sony to publicly confirm all these details, but I suspect it's gonna be quite a while before that happens.