> Who? I don't know, do we have any solid proof that it's Sony? Or that it's related to PS5? All I'm seeing, and have seen, are CU counts equivalent to PS4.

It has testing for exactly the same clock frequencies as the PS4 and PS4 Pro, which it's assumed would be for backwards compatibility purposes. It would be very strange for new chips to be testing for those odd frequencies unless it was somehow related to PlayStation.
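For anyone checking those figures, the stock GCN arithmetic works out exactly. A quick sketch of my own (the 18 CU / 800 MHz and 36 CU / 911 MHz inputs are the well-known PS4 and PS4 Pro retail GPU specs, not anything from the leak):

```python
def tflops(cus, clock_mhz):
    """FP32 throughput: CUs x 64 lanes x 2 ops (FMA) per clock."""
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

print(round(tflops(18, 800), 3))  # PS4: 1.843
print(round(tflops(36, 911), 3))  # PS4 Pro: 4.198
```

So a chip running BC tests at those exact frequencies reproduces the retail consoles' throughput precisely, which is the whole point of the observation.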
> Times change, gotta keep up!

On the other hand, an arms race for exclusive content has been extremely good for creators of TV shows and movies, with large budgets and lots of freedom being offered. I do think that people get worried about the unknown all the time. Look at all of the directors who originally raged about how Netflix making original movies without giving them theater releases would ruin the industry. That has not proven true at all, and now a lot of creative directors are choosing to work with Netflix.
> Consoles had traditionally offered one unit for no reason other than that was what had always been done.

This is a completely unwarranted assertion. Further, it ignores the fact that developing novel features, means of production, or marketing can be a fruitful path to expanding success. If disparate, highly qualified design teams repeatedly avoid a new approach, it's not because they're a bunch of mindless sheep. It's because they've assessed the possibility and agree with other teams' previous judgment that it's not worth it.
> To also compound this, the money is not in the hardware. The hardware is simply a medium to get you more software sales. Everyone expects to sell the console at a loss in the first year (the exception being Nintendo), and those costs come down as manufacturing improves and volumes ramp. So the idea is to get more people into the ecosystem, with the company willing to take a bigger loss on the lower end of the market, as that's what targets the most price-sensitive consumers.

This is all covered in my post, so I'm not sure what your point is.
> It's very difficult to add features in a stepping. There's just no room to do so if you did a good job on the base design. I've done it before, but you can't do something like adding compute ability. It's more like you figured out a situation in debug where performance was limited by something, so you add a feature to mitigate it.

Thanks for the extra detail. The just-so story I laid out was never meant to imply that Oberon got bigger and more performant with the steps, but rather that architectural improvements and tweaks developed by AMD's Navi team after Oberon was finalized were rolled back into it. This was to provide a more authentic response to dev work forced to proceed on the Oberon hardware (though still limited by its notably lower performance than "Titania").
> So, if I am understanding this correctly, under your speculation the capability of the RT hardware will be directly tied to the number of CUs within the GPU. Consequently, by sheer virtue of having more active CUs, XsX's RT capability will be comparatively better than PS5's (unless, in the hypothetical event that PS5's GPU is clocked to a frequency which makes it as performant as XsX's GPU despite having a lower active CU count).

I recall discussion in a previous OT about some AMD RT tests, or claims, that seemed to indicate their raytracing solution did indeed scale better with CUs than clocks. I could be remembering wrong. Perhaps anexanhume or others have a better handle on this information.
> 1) If Sony has a secret APU that isn't Oberon, why do they keep spending millions on Oberon? It seems like we are on stepping E, if not higher by now, so why keep spending millions on iteration after iteration of a chip they already knew in late 2018 would never see the light of day?

Because "Titania" wouldn't be available until less than a year before launch. But due to the strategy change, Oberon is already around, and it shares a good amount of hardware with its successor. So you can make the best of a bad job and test revisions by stepping it instead. Then you've got a collection of lessons learned that can be rolled together into a single step on "Titania".
> 2) How come we haven't heard or seen a single leak regarding this secret silicon? We keep hearing about every one of the APUs again and again in different DB leaks, but this one has somehow avoided all of that.

In my layout of the reasoning, "Titania" is almost 18 months behind Oberon on the production timeline. Are the database leaks we have from 2018? No. So at this same point in its development track, Ariel/Gonzalo/Oberon hadn't been spotted publicly yet either.
> I would support your thought process to some degree if DrKeo hadn't found out that Oberon Gen2 (native mode) was explicitly mentioned as 36CU full count!

Technically, it didn't say "full CU count". It says a BC-test metric is a "full chip result". That could definitely mean there's no more hardware available. But it could also plausibly mean this is the result when using the hardware to the fullest it allows for BC.
> And it's not just the comment "full chip results"; there is also a file with the same results that is called "oberonA0_regression_result_native". In addition, if it's just the native PS5 clocks and not the full CU count, does anyone actually believe that the PS5 is over 40CU AND running @2GHz?

You yourself have pointed out that AMD is about to reveal new progress in the Navi family that could raise the clock sweet spot 10 to 20%. The 5700 XT already runs at 1.75GHz sustainably. A 15% lift would be 2GHz. That'd make a 48 CU chip hit 12.3 TF.
> Well, to guess Oberon's GPU nine months in advance, correct to the third decimal, is pure luck I suppose 😜

The TF calculation is very simple. And since CU counts are always even, and extreme low and high values can be thrown away, that leaves few possibilities. You can stumble on the right one very easily--especially with very rounded clock figures too, like 1.8 or 2. What'd be impressive is a 9.272 TF prediction that came true, because that requires a very precise clock figure.
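To illustrate how forgiving that guess really is, here's a sketch of my own enumerating even CU counts against round clock figures. Several combinations land within a few hundred GFLOPS of 9.2 TF, so hitting one is not much of a feat:

```python
def tflops(cus, clock_ghz):
    # FP32: CUs x 64 lanes x 2 ops (FMA) per clock
    return cus * 64 * 2 * clock_ghz / 1000

# Even CU counts, round clocks -- the search space is tiny.
hits = [(cus, clock, round(tflops(cus, clock), 3))
        for cus in range(32, 61, 2)
        for clock in (1.6, 1.7, 1.8, 1.9, 2.0)
        if abs(tflops(cus, clock) - 9.2) < 0.3]
for h in hits:
    print(h)  # e.g. (36, 2.0, 9.216) and (40, 1.8, 9.216) both qualify
```

Six of the fifty combinations come within 0.3 TF of the target, which is why a coarse TF "prediction" proves little.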
> Why do people think Sony is creating a halfway-house custom APU to test backwards compatibility? They've already successfully tested the method in the PS4 Pro; functionally, another APU which isn't representative of the final silicon seems a tad pointless and extremely expensive.

There are multiple proposals for how Oberon's leaked tests may not be indicative of final PS5 hardware, and not all of them require it to be a completely separate chip.
> Both Windows Central and GitHub state 12TF... We've been discussing GitHub for the past 200 pages.

I'm pretty certain GitHub doesn't state a TF number for Arden. Only a CU count (and other unrelated tech details, such as theoretical RAM bandwidth).
> But what about RT? We don't know the die space AMD's solution will need. Nvidia's first gen adds almost 20%.

This is incorrect. You're right that an RTX compute unit is ~20% bigger than the prior Nvidia architecture. But most of that is the improved tensor cores; only about 8% comes from the RT cores. On Navi, 8% of a CU is about 0.2mm^2 (note the decimal). That's about 12mm^2 total for a 60 CU chip.
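As a sanity check on those numbers, a back-of-the-envelope sketch. The ~2.5mm^2 per Navi CU is my own rough assumption (chosen to be consistent with the 0.2mm^2 figure above), and the 8% share is the Turing-derived estimate:

```python
navi_cu_area = 2.5  # mm^2 per CU -- rough assumption for Navi 10
rt_share = 0.08     # ~8% per-CU growth attributed to RT cores (Turing figure)

rt_per_cu = navi_cu_area * rt_share
rt_total_60cu = rt_per_cu * 60

print(round(rt_per_cu, 3))      # 0.2 mm^2 per CU
print(round(rt_total_60cu, 1))  # 12.0 mm^2 for a 60 CU chip
```

On a ~300-400mm^2 class die, a ~12mm^2 RT adder is a rounding error, which is the point being made.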
No. An 8-core Zen2 from the desktop range is 76mm^2. However, we have a benchmark leak strongly suggesting that PS5 is using a version where L3 cache is massively reduced, leaving a 50mm^2 total size. (That's measured/calculated, not rounded off to the nearest 5 or 10.)
> What if Oberon is PS Now related and will be used as future server innards for PS4 games to run native 4K?

Let's assume PS5 retail is much stronger than Oberon. Next, a question: do you think PS Now will be able to stream PS5 games? This seems certain to me. But if so, why design server hardware that can only play PS4 games? In your scenario, Oberon can't match the retail PS5. So now you need twice as many server blades to operate PS Now: Oberon, and also something to do PS5-level titles. There doesn't seem to be any upside to that.
> 4k/60?
> 1080p/1440p/120?
> 4k/30 with super realistic graphics / bigger worlds?
> 8k, like they mentioned, probably for less demanding games?
> Super audio 7.1 Atmos/DTS EX and whatnot?
> I'm probably forgetting others...
> Anyways, my other question is if all the above fits your expectations, or some.
> Can all the "confirmed" hardware, no matter the console, achieve these expectations, and would 9TF be enough?

I actually think the hardware in either the PS5 or Series X should be able to do all of this. The Xbox One X ran a lot of games at 4k/30 with a CPU 4-5 times less powerful, a GPU that will likely be half as powerful, much less and slower RAM, and a pathetic 5400 RPM hard drive. Even the PS4 Pro delivered true 4k in some games, and it had a third of the GPU power we are likely to see. We are going to see some awesome things this gen. I'm really excited!
> Wait until there is the first reveal of the PS5.
> They will show a big black box with a lot of cooling grilles and say "3 times the GPU power of a PS4 Pro, if you do the math" (4.2TF x 3 = 12.6TF).
> That is when the fun times start.

At this point, if Sony said anything, there would be spin to counter it. Guys putting up graphs here, neat assumptions in dolled-up spreadsheets paraded around in bad faith, I'm sure--and after PS5's reveal I guarantee more of the same. But I'm sure Sony will give us some white-paper specs. That way we have hard facts to work with.
There's a hard stop at the end of all this--actual launch!--so "Titania" has to proceed a little faster than Ariel/Gonzalo/Oberon did. But even so, the calendar I explicated means that the bigger hardware literally didn't even exist until one or two months ago. It'd hardly be a surprise if it hasn't had a chance to surface visibly yet.
(Lest others misconstrue my intent, I'll repeat that I think there's too much special pleading in this scenario to make it likely. I just intended to show that it's not impossible due to time constraints, as many have incorrectly claimed.)
Of course, AMD's raytracing solution might require more die space than Nvidia's RT cores do. But there's no evidence for that yet.
> OK friends, I got a tingling in my balls about tomorrow. Going to bed and hoping for a pleasant surprise 😃

Better to see a doctor about that, mate.
> Watch how Phil talks about Scorpio this entire podcast.
> This is before the major studio acquisition announcements and the entire GamePass strategy... or even what the One X was to be called.

Thanks for posting this, I'd never heard it before. That is a great podcast if you want to understand what Microsoft is doing with GamePass. So much quality content there...
It's either good news or a tapeworm. I'm rooting for good news!!! 😃
Lol Wow.
> It's either good news or a tapeworm. I'm rooting for good news!!! 😃

Don't do this to yourself. Tomorrow is only a hope; it's never a given. Once you are old, you know that 99.9% of tomorrows suck if you have no control over them. Better to come to this realization early on. None of us has tomorrow for sure.
> Vega 64 (GCN): 4096 shading units, 256 TMUs, 64 ROPs, 64 CUs, 12.58TF
> 5700 XT (RDNA): 2560 shading units, 160 TMUs, 64 ROPs, 40 CUs, 9.754TF
> If you want to believe those 12 TF are RDNA flops, then you're looking at a GPU that outperforms a $400 GPU in a $499/$599 console. I just find that hard to believe.

Believe it, and then add storage and a new CPU. We are at no less than our wildest imaginations, but it does all appear to be true.
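One wrinkle in that comparison: the two TF figures bake in very different clocks. Working backwards from the quoted specs (a quick sketch of my own, using the standard shaders x 2 ops x clock formula):

```python
def implied_clock_ghz(tflops, shaders):
    # TF = shaders * 2 (FMA) * clock, so clock = TF / (2 * shaders)
    return tflops * 1000 / (2 * shaders)

print(round(implied_clock_ghz(12.58, 4096), 3))  # Vega 64: ~1.536 GHz
print(round(implied_clock_ghz(9.754, 2560), 3))  # 5700 XT: ~1.905 GHz
```

TF-for-TF comparisons across GCN and RDNA also hide an architecture gap: AMD's own figure put RDNA at roughly 1.25x GCN performance per clock, which is why a 9.75 TF 5700 XT trades blows with a 12.58 TF Vega 64 in the first place.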
Don't get me wrong, but I'm inclined to disagree with almost everything he said. The devil is in the details, and the same words can carry totally different meanings.
Big changes to an existing silicon won't happen a year prior to launch. A completely different silicon which was designed in parallel can happen a year prior to launch easily.
> Wait! Isn't there a segment about this in the first post of this OT, about whether it's RDNA or GCN?
> Why are we back to this topic?

We tend to revisit the same things every 200 pages or so... :)
> These changes will be awesome, but will they come in a patch or an entirely new SKU?

If I had to bet, I would say new SKU.
> There was a question about game subscription services in the Road to GDC 2020 developer poll, where responses were split, with slightly more developers saying they will harm the industry.

I can understand why devs are afraid.
> This is a completely unwarranted assertion. Further, it ignores the fact that developing novel features, means of production, or marketing can be a fruitful path to expanding success. If disparate, highly qualified design teams repeatedly avoid a new approach, it's not because they're a bunch of mindless sheep. It's because they've assessed the possibility and agree with other teams' previous judgment that it's not worth it.

This statement doesn't take into account a lot of the business cases that are out there, and why they failed.
> This is all covered in my post, so I'm not sure what your point is.

I think it was clear.
> My dumbass theory.
> If Sony doesn't announce a PS5 reveal event the last day of January or the first day of February, then a February reveal is dead.

Why? You only need 24 hours to a week, max, to announce a State of Play.
> And round we go. It's confusing, but people seem set on it being 12. I agree there is wiggle room in the specific way Spencer said "twice as powerful", and others interpreted that as 12TF.
> However, they haven't been denied, and Spencer retweeted one of the articles. If it wasn't 12TF, you'd think they'd head it off quickly, as the backlash if it were 9-10 (rightly or wrongly) would be messy.

Digital Foundry also confirmed that was about right with one of their sources, and the die size being basically 400mm^2 means it's probably right around that, unless there is some other secret sauce taking up all that space (eSRAM, anyone? haha).
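For what it's worth, a 12TF reading only needs plausible CU/clock pairs. A quick sketch of my own (none of these configurations are confirmed, they're just arithmetic):

```python
def tflops(cus, clock_ghz):
    # FP32: CUs x 64 lanes x 2 ops (FMA) per clock
    return cus * 64 * 2 * clock_ghz / 1000

# A few hypothetical Navi configs that land near 12 TF:
for cus, clock in [(52, 1.8), (56, 1.7), (60, 1.6)]:
    print(cus, clock, round(tflops(cus, clock), 2))  # ~11.98, ~12.19, ~12.29
```

All three fit comfortably inside a ~400mm^2 APU budget alongside eight Zen2 cores, which is why the 12TF interpretation isn't physically implausible.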
It is entirely possible that Sony has been developing, in parallel, two completely different SoCs. I've said before it would be costly to do, but you seem very convinced about it, and I will concede that while I've never heard of such a thing happening since the Dreamcast days, it's not impossible, so this becomes simply a difference of opinion.
I recall reading that MS were designing two mid-gen chips, one for 2016 and one for 2017. Obviously the 2016 chip was abandoned in favour of Scorpio. Can you tell us anything about that?
I don't think that this makes too much sense, but hey it is crazy speculation time :) Apparently the Hololens people will be at Mobile World Congress again next month even though they already revealed/announced Hololens 2 last year at this same event. Maybe some type of consumer iteration of this technology for PC and Xbox use?
This is the Phil quote, right? There were two different *plans* - a 2016 version and a 2017 version. Two chip designs. But two completely different chips were never developed.
That's worth way more than two cents.
As always really appreciate all of the insight.
Sony doesn't want or need to play the power game.
They likely had a target price, one that has done amazingly for them, and designed for it. I've been harping on this for months.
A one-console SKU targeting $599 nearly killed them. They came back with a $399 SKU that did amazingly well.
Do you really want to be in a board meeting at Sony trying to justify why you need a one-SKU launch aiming to be the most powerful just so the internet likes you, considering their history and how critical this is to their survival?
That is an incredibly hard sell unless you had a cheaper option...
They want power at a reasonable price to accelerate adoption. That's the name of the game. IMO they will design an elegant and powerful console at $399, with the specs to match.
> This is the Phil quote, right? There were two different *plans* - a 2016 version and a 2017 version. Two chip designs. But two completely different chips were never developed.

I was actually on IGN and talked a bit more about this too. The team started thinking about the idea of Scorpio back before the original Xbox One launched in 2013. I think I presented the first plan in either late 2012 or early 2013. There were different options for different years, starting with a 2016 version, and as Phil said, he pushed the team and we came up with a 2017 version. But there was never a 2016 chip created.
> Yep. I get a weird deja vu in here about every 24 hours... and before you know it, there's a new OT...
> ...and 24 hours later, we're back to discussing what probably derailed the last OT.

But why go through all of the iterations and testing to make a separate APU for the cloud when you could just use the one you are putting in your actual console? Especially if it is going to be close to the same physical size (a 9 TF chip and a 12 TF chip are not going to have massively different die sizes with RDNA). You incur huge costs to design the chip, and then have to mass-produce two different chips. It doesn't make a ton of sense.
I'm not a hardware engineer, but I would like to hear from anyone who is.
Oberon keeps being brought up.
Explain this to me.
Microsoft is building Xcloud with Xbox One innards on server racks. Eventually they plan to integrate Lockhart into those server racks instead to run Xbox One, Lockhart games. Yes/No?
So now the mind bender...
What if Oberon is PS Now related and will be used as future server innards for PS4 games to run native 4K? Mark Cerny has stated you need 8TF for native 4K. What if, because Jaguar has hit a brick wall in development, Sony wants Oberon to be the heart of its replacement, with Zen? And maybe even the same Oberon hardware to run all past PlayStation generations' games too (PS1, 2, 3 and 4 by emulation).
The heart of that argument is...
If this ends up being the case, Oberon has nothing to do with PS5, and this thread can stop chasing its tail every 24 hours stating that the GitHub leak is PS5 final spec as a certainty, especially when Matt has stated it isn't the case.
This thread can be insufferable at times...
It seems like any theory other than the null hypothesis (that Oberon is a 9.2 TFLOPS PS5) is shot down without any discussion.
Haha, and what a pastebin it was.....
This is a few months old but I find Remedy's comment about the PS5 SSD's fast streaming freeing up CPU bandwidth to be very interesting.
Remedy Explains What Control Will Be Like On PS5 - RespawnFirst
I can't wait to hear more about it from devs!
Sony building in a blitter
Seriously though, if you have streaming-heavy engines and your SSD can serve 5GB/s, which may need at least some decompression and sorting into RAM, is that heavy enough that you'd need to factor in bandwidth contention and even CPU time? Seems it might be. These consoles might benefit from some "super onion" type of bus arrangement to avoid choking when the GPU, CPU, and SSD are all vying for bandwidth.
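A crude back-of-the-envelope for that contention worry. Every number here is a placeholder assumption of mine, not a known PS5 spec:

```python
bus_gb_s = 448.0    # hypothetical: 256-bit GDDR6 at 14 Gbps
ssd_gb_s = 5.0      # raw SSD feed, per the post
decomp_ratio = 2.0  # guess: hardware decompression ~doubles bytes written

# Decompressed data written to RAM, then read back by the GPU:
streaming_traffic = ssd_gb_s * decomp_ratio * 2
share = streaming_traffic / bus_gb_s

print(streaming_traffic)      # 20.0 GB/s
print(round(share * 100, 1))  # 4.5 (% of the bus)
```

Single-digit percent on its own, but it's bursty and sits on top of everything the CPU and GPU already demand, hence the appeal of prioritized or partitioned bus arrangements.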