Oh. Thanks for the clarification.
No, he speculates that Navi12Lite and Navi21Lite could be Arden, he doesn't know if these are Arden.
I dunno if a next-gen console can get 1.8GHz clocks with more than 40 CUs, but I also expect that a cut-down 5700 is too weak for next gen, and Sony/Microsoft are aware it's in truth a budget mid-range GPU that is for some reason $450. They will likely want something with more punch, but who knows.
Because one of these consoles is rumored to be 390+mm2, and adding CUs isn't nearly as detrimental to power draw as raising clock speeds.
I.e., the GPU will be more efficient per watt with more CUs and lower clocks.
If it's really 390mm2 then 40 CUs doesn't add up.
No, he speculates that Navi12Lite and Navi21Lite could be Arden, he doesn't know if these are Arden.
The Arden and Argalus codenames are parallel to some other things.
I need to clarify this.
I don't know much about what Arden and Argalus mean, and ofc they're Xbox related codenames just like Brad Sams said.
Do we have any clue what that can be? Kits for xcloud maybe? Or something Surface related?
According to hmqgg, Arden and Argalus are not even Anaconda and Lockhart. Rather something else.
Reminds me that Brad Sams reported that specifically xCloud has a TF number above Stadia's 10.7TF, so maybe they needed an RDNA 2.0 chip based off Navi 20 to reach that on the server.
According to hmqgg, Arden and Argalus are not even Anaconda and Lockhart. Rather something else.
According to hmqgg, Arden and Argalus are not even Anaconda and Lockhart. Rather something else.
Well if those are meant to be servers, then it makes sense to find them on desktop Linux commits. But those APUs are not for home consoles.
I think you're wrong on this.
Like I've explained in the past, power limitations only matter insofar as more cooling is required. GDDR6 doesn't need its own active cooling.
Don't expect a Navi 20 derivative chip in a console, everyone.
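For anyone who wants to sanity-check the TF talk above (the Stadia 10.7TF figure), the usual back-of-the-envelope math is CUs × 64 shaders × 2 ops per clock × clock speed. A rough sketch, where the CU/clock combos are made up for illustration, not anything leaked:

```python
# Rough FP32 TFLOPS math: CUs * 64 shaders * 2 ops/clock (FMA) * clock.
# The CU/clock combinations below are illustrative guesses, not leaks.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

STADIA_TF = 10.7  # figure mentioned above

for cus, clock in [(40, 1.8), (48, 1.8), (56, 1.7), (60, 1.8)]:
    tf = tflops(cus, clock)
    verdict = "above" if tf > STADIA_TF else "below"
    print(f"{cus} CUs @ {clock} GHz -> {tf:.2f} TF ({verdict} {STADIA_TF} TF)")
```

So a stock 5700-class setup (40 CUs at ~1.8 GHz) sits below that number on paper; you'd need more CUs and/or higher clocks to clear it.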
I don't see any clock info. Where are you seeing them?
Does seem to be for consoles with that clock and naming scheme... Things are becoming interesting.
I agree it seems cloud gaming related.
Well if those are meant to be servers, then it makes sense to find them on desktop Linux commits. But those APUs are not for home consoles.
New wild theory: all those Navi lite APUs could be aimed for cloud gaming, hence the gaming reference in the serials.
People overexcited for Cell 2 obviously
People want their favorite plastic box to have more flops and special sauce
There is something odd about the data we have so far. I don't doubt what any respectable reporters have heard, so I guess I'll say it's 60% likely that PS5 is more powerful at this point (my own random ass guess). But at the same time we have an enormous Anaconda die and we know MS was gunning for some sort of high end, premium console with the two console strategy.
So, if PS5 is more powerful, it needs to be VERY powerful. Not like 10TF or 11TF or maybe even 12TF. And that would feel like a significant shift in Sony's strategy. Back to the PS3 days in a way, very premium and likely very expensive. Obviously they are a lock to sell the most units next gen but I do wonder if they're giving MS a foothold for a $350 Lockhart to grab a bunch of sales while the PS5 sits at $500+.
Other possibilities:
- Sony is willing to lose a boatload of money on hardware
- The rumor is wrong
- The die size estimate is inaccurate
And this is just the calm before the storm of the big leaks. We are not ready for when Jason clicks publish lol.
This is what I hope for PS5 and XB2 personally
" A CPU that's 32-bit & 733 MHz, custom Intel Pentium III Coppermine-based processor. Has a 133 MHz 64-bit GTL+ front-side bus (FSB) with a 1.06 GB/s bandwidth. The system has 64 MB unified DDR SDRAM, with a 6.4 GB/s bandwidth, of which 1.06 GB/s is used by the CPU and 5.34 GB/s shared by the rest of the system. "
If you think about it, if Reiner got some dev talking, then Jason 100% got some info too.
Seems like the new navi 10 post by komachi further points at HBCC and the HBM + DDR4 rumor.
If I can say, the PlayStation fans seem the least interested of the two in winning the specs war.
Welp, there's the secret sauce, game over, PlayStation won, Reiner was right.
So just another Tuesday lol.
Seems like the new navi 10 post by komachi further points at HBCC and the HBM + DDR4 rumor.
Personally, as long as both systems are a true generational leap and are priced reasonably, it doesn't matter which one is more powerful, especially if the difference is minimal.
If I can say, the PlayStation fans seem the least interested of the two in winning the specs war.
DDR4, not GDDR4.
In what way will an HBM2 + GDDR4 setup be more beneficial than just GDDR6? Power consumption?
According to a doubtful Rambus presentation.
In what way will an HBM2 + GDDR4 setup be more beneficial than just GDDR6? Power consumption?
So what's the downside?
DDR4, not GDDR4.
And it's more beneficial in terms of cost, power consumption and efficiency.
Doubtful?
Lower bandwidth overall, and the amount of high-bandwidth memory is smaller (8 GB). This solution is coming from the assumption that developers won't need more than 8 GB of fast memory and they can use the DDR4 or the SSD for everything else.
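Just to illustrate that assumption, here's a toy sketch of the "hot data in 8 GB, spill the rest" idea. It's not how any real console memory manager works, and all the allocation names and sizes are made up:

```python
# Toy sketch: hot allocations go to the 8 GB HBM pool until it's full,
# everything else spills to the slower DDR4 pool. Sizes are made up.
HBM_BUDGET_GB = 8.0

def place(allocations):
    hbm, ddr4, used = [], [], 0.0
    for name, size_gb in allocations:
        if used + size_gb <= HBM_BUDGET_GB:
            hbm.append(name)
            used += size_gb
        else:
            ddr4.append(name)
    return hbm, ddr4

fast, slow = place([
    ("render targets", 2.0),
    ("streamed textures", 5.0),
    ("geometry", 1.5),            # no longer fits -> spills
    ("gameplay/audio data", 4.0),
])
print("HBM2:", fast)
print("DDR4:", slow)
```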
That was debunked by some dev here; GDDR6 power consumption is not as bad as claimed.
Can you point to this "debunk"? Because most of the posts here do seem to suggest that HBM and DDR4 will use less TDP than GDDR6.
That was debunked by some dev here; GDDR6 power consumption is not as bad as claimed.
So clever then.
DDR4, not GDDR4.
And it's more beneficial in terms of cost, power consumption and efficiency.
https://www.resetera.com/threads/ne...-anaconda-dont-want-none.112607/post-21043234 but not sure if anexanhume is a dev ;) Edit: maybe it will use less TDP but also will have less bandwidth.
Can you point to this "debunk"? Because most of the posts here do seem to suggest that HBM and DDR4 will use less TDP than GDDR6.
Ok, but 8 GB for the GPU seems small for next gen, and reading from DDR4 is too slow for the GPU, so you get less power consumption for inferior resolution, nothing breathtaking.
We have a measure coming from a Tom's Hardware measurement: it is 16 watts for 8 GB of GDDR6, 32 watts for 16 GB and 48 watts for 24 GB of GDDR6*, but power consumption is linked to bus size, and for a 384-bit bus I imagine between 48 and 58 watts, and all of this without the memory controller. It seems they are counted inside GPU power consumption...
It seems 8 GB of HBM2 is 10 watts and 16 GB of DDR4 is nearly 7-8 watts, so you have a non-negligible gain.
* It seems power consumption grows linearly with memory quantity, judging by the DDR4 measurements.
Yeah. It looks like GDDR6 is a pretty safe solution for devs.
About the HBM2 + DDR4 solution, I think the fact that we're dealing with a split memory pool will be a bigger issue than the HBM2 portion having slightly lower bandwidth than GDDR6. Sure, there are supposedly some shenanigans going on to make the two memory pools look like one pool of memory, and the memory is supposed to be automatically allocated, but what if a developer decides to use 12 GB or more of the 24 GB as VRAM? The next-gen Xbox would be at a rather big advantage in that case, and that scenario doesn't sound all that unrealistic, especially when we look at how the unified memory was used this gen.
We have a measure coming from a Tom's Hardware measurement: it is 16 watts for 8 GB of GDDR6, 32 watts for 16 GB and 48 watts for 24 GB of GDDR6*, but power consumption is linked to bus size, and for a 384-bit bus I imagine between 48 and 58 watts, and all of this without the memory controller. It seems they are counted inside GPU power consumption...
It seems 8 GB of HBM2 is 10 watts and 16 GB of DDR4 is nearly 7-8 watts, so you have a non-negligible gain.
* It seems power consumption grows linearly with memory quantity, judging by the DDR4 measurements.
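Taking those figures at face value (they're rough, and the bus-width caveat above applies), a quick comparison of a 24 GB GDDR6 setup against the 8 GB HBM2 + 16 GB DDR4 split:

```python
# Using the numbers quoted above: GDDR6 scales ~2 W per GB, 8 GB of HBM2
# ~10 W, 16 GB of DDR4 ~7.5 W. Controller/PHY power is ignored here.
gddr6_24gb = 24 * (16 / 8)              # ~48 W
split_pool = 10 + 7.5                   # ~17.5 W
print(f"24 GB GDDR6:            ~{gddr6_24gb:.0f} W")
print(f"8 GB HBM2 + 16 GB DDR4: ~{split_pool:.1f} W")
print(f"Difference:             ~{gddr6_24gb - split_pool:.1f} W")
```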
Thanks! From my quick reading, this (on a very rough level) means they'd be responsible for generating the clock signal, or otherwise maintaining sync?
Again, no official word from AMD but I think it'd be extremely surprising if they're limited that way.
Yes, adding 8 WGP would only require 36.4mm^2. But we know that Anaconda also has a wider memory bus than Navi 5700, so there's an extra 13.19mm^2 there too. Thus ~371mm^2 for 26 WGP. But that's assuming no RT hardware; if we add such, we're now talking about 381mm^2. Still theoretically possible, but I think there's another concern. In this setup you've got 30% more stream processors than 5700XT, but still the same amount of cache for them to share, along with other support silicon. I'm definitely no GPU designer, but I doubt there's that much wiggle room in how these pipelines are rightsized to each other. So either you shouldn't add quite as many WGPs per SE, or else you can but you also have to add the appropriate supporting hardware, which makes the overall size go up yet again.
It's not impossible, but I think this is the maximum to expect. And a little lower on the WGP count would probably be more plausible.
Do you think they will eventually switch to 5nm? Or will that be ignored, and they go straight to 3nm GAAFET instead?
Because the RX 5700 XT with 40 CUs at 1.8GHz already has a 225W TDP (with an estimated power consumption of 160W) and a die size of 251mm2.
People will have to explain where they go from here and add like 35% more CUs when the GPU alone eats up 160W and the CPU might tap 40W.
Because one of these consoles is rumored to be 390+mm2, and adding CUs isn't nearly as detrimental to power draw as raising clock speeds.
I.e., the GPU will be more efficient per watt with more CUs and lower clocks.
If it's really 390mm2 then 40 CUs doesn't add up.
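A toy model of the "wide and slow beats narrow and fast" argument. Dynamic power scales roughly with active silicon × frequency × voltage², and voltage has to rise with clocks; the voltage/frequency slope below is completely made up, it's just there to show the shape of the argument:

```python
# Toy power model: P ~ CUs * frequency * voltage^2, with a made-up
# voltage/frequency slope. Normalized to 40 CUs @ 1.8 GHz (5700 XT-ish).
def rel_power(cus, clock_ghz, base_cus=40, base_clock=1.8):
    volts = 1.0 + 0.35 * (clock_ghz - base_clock)   # assumed V/F curve
    return (cus / base_cus) * (clock_ghz / base_clock) * volts ** 2

def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

for cus, clock in [(40, 2.1), (56, 1.5)]:   # same ~10.75 TF either way
    print(f"{cus} CUs @ {clock} GHz: {tflops(cus, clock):.2f} TF, "
          f"~{rel_power(cus, clock):.2f}x the power of 40 CUs @ 1.8 GHz")
```

Same TFLOPS on paper, but the wider chip gets there at noticeably lower power in this toy model, which is the per-watt point being made above.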
Forgive me if I missed something, but according to the Gonzalo leak it looks like the L3 cache on the CPU is half the size of the desktop parts. It makes sense to cut back there a bit since they're already a huge upgrade, and I got the sense from the DF video that the L3 takes up a lot of space. Is that something that everyone is already taking into account, or is it not enough of a difference on the scale of a console APU to matter?
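To put a very rough number on whether it matters: assuming something on the order of 1 mm² per MB of 7nm L3 (a placeholder figure, not a measured one), halving a Zen 2-style 32 MB L3 saves roughly this much of the APU:

```python
# Placeholder figure: ~1 mm^2 per MB of 7nm L3 (assumption, not measured).
MM2_PER_MB_L3 = 1.0
saved_mm2 = (32 - 16) * MM2_PER_MB_L3     # 32 MB desktop L3 -> 16 MB

for apu_mm2 in (350, 390):                # die sizes floated in this thread
    print(f"~{saved_mm2:.0f} mm^2 saved, ~{saved_mm2 / apu_mm2:.1%} of a {apu_mm2} mm^2 APU")
```

So a real saving, but only a few percent of the die under these assumptions; whether that's enough to matter is exactly the question.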
Was this posted already??
By Komachi:
Navi10Lite
Navi12Lite
Navi21Lite
Also Navi10Lite has two GFX IDs. With Vega if a GPU had two GFX IDs, HBCC was enabled.
This could possibly mean that Navi10 has HBM??
Of course nothing is confirmed 100%
Why would console APUs be found on desktop Linux commits? It doesn't make sense to me. Were the PS4, Pro, XB1 and Scorpio APUs also found there? Whatever those are, they are not home console APUs IMO.
About the HBM2 + DDR4 solution, I think the fact that we're dealing with a split memory pool will be a bigger issue than the HBM2 portion having slightly lower bandwidth than GDDR6. Sure, there are supposedly some shenanigans going on to make the two memory pools look like one pool of memory, and the memory is supposed to be automatically allocated, but what if a developer decides to use 12 GB or more of the 24 GB as VRAM? The next-gen Xbox would be at a rather big advantage in that case, and that scenario doesn't sound all that unrealistic, especially when we look at how the unified memory was used this gen.
EDIT:
I mean this would still be my preferred solution. Would make next gen more interesting and the tdp gains could be invested in higher clocks but there are some significant downsides.
Memory chip power consumption also depends on the speed of the chips (i.e. memory clocks) and thus voltages.
Speaking with Buildzoid, we know that Vega Frontier Edition's 16GB of HBM2 pulls 20W max, using a DMM to determine this consumption.
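Normalizing the figures quoted in this thread to watts per GB (with the caveat above that these move with clocks and voltages):

```python
# Watts per GB from the numbers quoted in this thread (rough, and they
# shift with memory clocks/voltages as noted above).
figures = {
    "GDDR6 (Tom's Hardware, 16 W / 8 GB)": 16 / 8,
    "HBM2 (estimate above, 10 W / 8 GB)":  10 / 8,
    "HBM2 (Vega FE, 20 W / 16 GB)":        20 / 16,
    "DDR4 (~7.5 W / 16 GB)":               7.5 / 16,
}
for name, w_per_gb in figures.items():
    print(f"{name}: ~{w_per_gb:.2f} W/GB")
```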