About 8.9TF, which would be around 12 Vega TF. It doesn't mean a whole lot, though, because we don't know much about Navi, and comparing Nvidia and AMD in terms of TF isn't very helpful.
> Cool, this is good and a relief to know.

I think you got the TF and clock speeds mixed up. The X1 runs at 1000MHz in the Shield, but only 768MHz on the Switch while docked and 384MHz in mobile mode; that's why it's 0.39TF docked and 0.19TF in mobile mode.
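Those figures follow from the standard FP32 formula (cores x 2 ops per clock x clock speed); a quick sanity check in Python, using the Tegra X1's 256 CUDA cores:

```python
# FP32 TFLOPS = cores x 2 ops per clock (FMA) x clock speed.
# The Tegra X1 has 256 CUDA cores; clocks are from the post above.
def tflops(cores, mhz):
    return cores * 2 * mhz / 1_000_000

print(f"docked: {tflops(256, 768):.3f} TF")   # → 0.393 TF
print(f"mobile: {tflops(256, 384):.3f} TF")   # → 0.197 TF
```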
Using the GPU for compute instead of per-pixel work is common, but it usually takes only a fraction of the GPU. For instance, in Horizon most of the compute work they dedicate the GPU to goes into the dynamic cloud system, and Guerrilla invested 2ms of GPU time in that (slide #95), which is 6% of the GPU time. So as you can see, even though developers use compute for things like voxel-based effects, it's a small percentage of the GPU, while most of the GPU does pixel-based calculations that scale with resolution. I've already done a pretty detailed post with benchmarks showing that most modern engines still scale almost 1:1 with resolution; maybe someone can dig it up and link to it, because it's in the previous thread and I'm having a hard time finding it.
I'll also add just in case that RT is also pixel-based because someone always jumps and yells "But RT!" :)
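The 6% figure above is just the 2ms cloud budget measured against a 30fps frame; a quick back-of-the-envelope check (the ~33.3ms frame budget is an assumption derived from the 30fps target):

```python
# 30 fps leaves ~33.3 ms of GPU time per frame; Guerrilla spent 2 ms
# of that on the volumetric cloud system (numbers from the post above).
frame_ms = 1000 / 30
cloud_ms = 2.0
print(f"{cloud_ms / frame_ms:.0%} of the frame")  # → 6%
```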
Most of the things that you are talking about are CPU work, not GPU. If the CPU is the same, we have no NPC problem.
All these things are not because of the lowest common denominator; they are because of the highest. The PS4 was the baseline for building games: the XBO got a slight downgrade and the PC a slight upgrade, but games were built for the PS4. You can't ship a lighting solution that consoles can't run. I mean, I'm sure they could, but who wants a 600p game on PS4? So UE4 dropped it. In the end, you have a console that aims at a certain resolution; if the PS4 had been a 720p machine, then Watch Dogs and UE4 wouldn't have been downgraded. And that's the point: TFs correlate with resolution almost 1:1, so if you want to render a game in 4K with a 12TF GPU, you will get the same ballpark performance from a 6TF GPU running the game at 1440p. All you need is enough buffer, and the rumors talk about a huge buffer. If Lockhart were 3TF and Anaconda 12TF, then it might be a bit hard because of cases like 4K checkerboarding. But half the power for 1/4 the resolution? Man, that's a huge buffer. I doubt developers will ever have to downgrade a single game after lowering the resolution to 1080p; games might actually run better on Lockhart.
Yes :)
It's not always 1:1, so half the TF might run half the resolution at 25fps instead of 30fps; that's why Lockhart needs a buffer, so it won't be 1:1, and that's also why developers might need to do a bit of optimization in edge cases. When you drop resolution you can lower LOD, texture size, effect resolution, reflection resolution, shadow map resolution, etc. But generally speaking, if you cut the TF in half and cut the resolution in half, you will get the same ballpark performance (again, if someone can find my original post about that).
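As a sketch of the ballpark argument in the posts above (a toy model that assumes GPU cost scales 1:1 with pixel count; the TF figures are just the rumored examples, not confirmed specs):

```python
# Toy model: GPU load scales ~1:1 with pixel count, so relative load
# per TF tells you whether the weaker box has headroom ("buffer").
def pixels(res):
    return res[0] * res[1]

def relative_load(tf, res, base_tf, base_res):
    # Load relative to the baseline machine: < 1.0 means headroom.
    return (pixels(res) / pixels(base_res)) / (tf / base_tf)

# 4 TF at 1080p vs 12 TF at 4K: 1/4 the pixels on 1/3 the TF.
print(relative_load(4, (1920, 1080), 12, (3840, 2160)))   # → 0.75
# 6 TF at 1440p vs 12 TF at 4K: the "same ballpark" case.
print(relative_load(6, (2560, 1440), 12, (3840, 2160)))   # → ~0.89
```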
> Did he ever promise that Microsoft was going to bring VR at the reveal, or did he say that the next step change must be capable of delivering high-fidelity VR?

I just did? Or do you think the Scorpio reveal was forced upon him and he had no say in it?
If that is true, and Sony is using Navi 10 Lite, maybe Anaconda will be using a true RDNA chip.
Or maybe they will both use the same; I don't want to add to the console war in here.
No way there's going to be 3 SKUs! We've only heard about two, and it's going to be a streaming box and a premium box.
> Navi is GDDR6. I thought the HBM rumor would've been smashed when that came out.

There was actually an extremely technical and detailed rumor that they're going with an HBM + DDR4 combination, as they got an amazing deal on HBM. We'll see; you can't take anything for granted at this point, really.
> We can try!

Indeed. But is there any chance we can derive (an estimate of) the number of stream processors from the die size?
Or do we believe the GPU sports 3072 stream processors?
> It's a good analogy for how different architectures with the same TF number perform differently, but not really in the Anaconda/Lockhart case. I mean, I'm not a car guy so I'm not sure about it, but if you had two identical cars and doubled just the HP of one of them, wouldn't it be 2X more powerful?

It's rare to read a post on ERA where someone uses an analogy to explain something that is a REALLY good analogy. Nicely put, Kaiju. You, sir, are a scholar.
> Your words are not the source. I need a link to the material. Does that not make sense? Like I said, it should be easy for you since you remember it vividly.

I just did? Or do you think the Scorpio reveal was forced upon him and he had no say in it?
Give me the source that explicitly has him saying he is promising VR. Should be easy for you.
On stage at the Xbox briefing, Spencer called out VR as another big feature of Scorpio, but gave no details as to how you'd jump into virtual reality through your Xbox. (Bethesda's creative director Todd Howard, however, did appear in a video to say that the VR version of Fallout 4 that his company is developing would come to Project Scorpio.)
"When we went out and talked to VR developers," Spencer says, "the capability and the hardware spec that they need to deliver a console-like experience to VR was a requirement of 6 teraflops, which clearly, today's consoles—PlayStation 4 and Xbox One—don't have."
{snip}
"The best place for VR innovation is the PC," Spencer says. "I think developers should still go focus on the PC, because I think that's a great place to innovate. What we're doing… is we're able to take some of the PC innovation that we see… and bring it to the console space, to enable those magical experiences on Scorpio when it launches."
I'd say this fits the bill?
It's a bit annoying when someone screams "Source" but can't do a google search.
> Because I don't really care and wanted the poster to back up his claim of "liar" instead of taking his word for it, twice.

But how about doing some Google research on your own if you asked for a "source"? It's not that hard, really.
Saw this in the Navi announcement thread.
It looks like Navi is some sort of hybrid of GCN and AMD's next gen arch.
> I don't really like that car metaphor because I just don't know enough about cars :)

That's stretching the analogy quite a bit there :D If you double the power it will, theoretically, be twice as "powerful" - I mean, yeah - but it won't be twice as fast.
One thing that bothers me about the idea that GPU loads scale linearly-ish, so you can just quarter the res and be done, is that the linear scaling holds true when scaling UP, because you're mostly taking effects that scale with resolution and scaling them up. That's not necessarily true for loads scaling down. It is if the thing is designed with scaling down in mind, but if you're designing something that does a lot of res-independent stuff (voxelized lighting pipelines, for instance), that stuff doesn't scale down as neatly as you'd imagine.
You mentioned Horizon and its use of compute, but you have to keep in mind that was a game very much designed with the low-tier GPU in mind and scaled up; they likely only dedicated as much compute to that as they knew they'd have left after doing everything that actually scales with res.
That has nothing to do with anything. Sony can use whatever RAM works best for them.
Or maybe Ana will be using Vega because true Navi is coming in 2020.
I love this thread so much.
Leak that Navi is awesome:
- Sony fanboys: Navi was developed by Sony. MS won't get it! Moneyhatted architecture!
- XBOX fanboys: Bullshit - XBOX is Navi too plus some next gen shit. 3D chess!
Leak that Navi is shit:
- Sony fanboys: Everyone is getting Navi!
- XBOX fanboys: Navi sucks, we're on the mega custom Vega next level shit!
Both consoles are the same spec:
- Sony fanboys: All along I've said it's the games that matter. Who gives a shit about flops. 20% is nothing.
- XBOX fanboys: We're buying all the studios. Game pass is life.
> It could be a hybrid because AMD is transitioning to a compute-focused product in conjunction with a gaming-focused one. GCN will live on, but Navi needs to have compute capabilities to have Instinct iterations in the meantime.

I wonder if this is because the full RDNA GPUs are not ready yet and current Navi is a stopgap... or are we seeing the "Navi was created for Sony" rumors in full effect here... or maybe it's both. It would kinda make sense if Sony asked for this to make full PS4 BC a much simpler endeavor...
GDDR6 works best for them, because it doesn't cost a bazillion dollars :P
> Almost certainly, Ana will be using NextGen, not Vega, or at least a hybrid Navi-NextGen.

Or maybe Ana will be using Vega because true Navi is coming in 2020.
> Yeah I know, no value at all, just like defending everything from MS/Sony/Nintendo etc...

You're not adding value with this comment. It makes you seem like a system warrior.
You guys don't really have to respond to every console war post, tbh.
> It's a waste of time. Just play Slay the Spire like I am doing now.

As a parent, it comes naturally to correct foolish behavior. I want my little children to grow up right. 😂
I think you missed the post this was a reply to.
> Great write-up. I am still in the 11TF camp... though I can't shake this feeling that the PS5 will be 10TF. It just sounds right somehow, but I expect Ana to be 11TF.

We can try!
We have two unknown variables, unfortunately: clocks and CU count. For reference, I'll be using Strange Brigade benchmarks from here.
The knowns are:
RTX 2070 + 10% performance in Strange Brigade at 4K. This puts it within a few % of Vega 64, so let's call them equal for simplicity's sake.
Architecture gain of 1.25x per clock based on a suite of 30 benchmarks at 4K. This is a good comparison because it's more likely to stress any memory bandwidth disparities.
Perf/Watt gain of 1.5x over GCN at 14nm. I'll assume this is Vega 64 and immediately discard the metric. Why? Because we already know Vega 20 enjoys a 1.25x perf/Watt boost over Vega 64, so this is AMD admitting Navi is running at some clock where there are no additional perf/Watt advantages.
I think we should assume a minimum of 40 CUs based on the various leaks, and no more than 52 based on AdoredTV's numbers.
Vega 64 has a 1250MHz base clock and a 1550MHz boost. To draw equal, Navi must make up with clocks whatever CU deficit the 1.25x factor doesn't cover. That factor boosts 40CUs to an effective 50, meaning a 64/50 ratio boost to clocks: 1600MHz base clock, 1984MHz boost clock. These don't seem totally far-fetched, given that AMD says Navi clocks better and Nvidia designs can clock that high.
44CUs: 1450MHz base, 1800MHz boost.
48CUs: 1333MHz base, 1650MHz boost.
52CUs: 1250MHz base, 1550MHz boost. (No change)
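The clock table above falls out of a one-liner; a quick sketch (assuming the 1.25x per-clock factor and Vega 64's 1250/1550MHz clocks from the knowns above):

```python
# Estimate the Navi clocks needed to match Vega 64, assuming the 1.25x
# per-clock architectural gain discussed above.
VEGA64_BASE, VEGA64_BOOST, VEGA64_CUS = 1250, 1550, 64
IPC_GAIN = 1.25

def required_clocks(navi_cus):
    effective_cus = navi_cus * IPC_GAIN      # e.g. 40 CUs act like 50
    scale = VEGA64_CUS / effective_cus       # clock boost needed to draw level
    return round(VEGA64_BASE * scale), round(VEGA64_BOOST * scale)

for cus in (40, 44, 48, 52):
    base, boost = required_clocks(cus)
    print(f"{cus} CUs: {base} MHz base, {boost} MHz boost")
```

Note that 52 CUs comes out slightly *below* Vega 64's clocks, since 52 x 1.25 = 65 effective CUs, one more than needed; the post rounds that case to "no change".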
What's also interesting to my mind is CU sizing. If CUs have grown a lot, it really speaks to a lot of architectural rework. Vega 20 fits 64 CUs and a 4096-bit HBM2 controller in 332mm^2. I think it has more negative die area than strictly needed; my personal belief is that this is because it was a hurried refresh serving as a pipe cleaner, and because current and power density meant it couldn't be shrunk further due to IR-drop and heat-density concerns. Navi is sporting a 255mm^2 die.
Navi has a 256-bit GDDR6 interface, and we know that per 128 bits it's 1.5-1.75x larger than a 2048-bit HBM2 interface. Since both are doubled in this case, let's assume the worst case: that their areas are roughly equal, rather than GDDR6 being 20-25% smaller. I do this because I assume Navi will have less negative die area.
That means the rest of the area should be roughly equal, and so we can do an approximate CU sizing.
40 CUs: Navi CUs are 23% larger than Vega 20.
44 CUs: 12%
48 CUs: 2%
52 CUs: -5%
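Those percentages come from simple area-per-CU division; a quick check (this lumps the memory-interface area, assumed roughly equal per the above, in with the CUs on both dies, matching the rough comparison in the post):

```python
# Rough CU-size comparison: area per CU on each die, assuming the
# memory interfaces occupy roughly equal area and so wash out.
VEGA20_AREA, VEGA20_CUS = 332, 64   # mm^2, CUs
NAVI_AREA = 255                     # mm^2

vega_cu = VEGA20_AREA / VEGA20_CUS
for cus in (40, 44, 48, 52):
    delta = (NAVI_AREA / cus) / vega_cu - 1
    print(f"{cus} CUs: Navi CU {delta:+.0%} vs Vega 20")
```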
In any event, 255mm^2 is a good sign for consoles being able to include a full Navi 10 die along with a ~70mm^2 Zen 2 design, with some spare room for glue logic and misc. IO. If that leaked PS5 dev kit rumor is true (312mm^2), we're clearly dealing with a cut-down Navi 10 (or a LITE version with smaller CUs?).
Which outcome is better for consoles? I would argue the smaller CU is actually better, because it makes the clock situation a lot more favorable. I think the 52CU scenario is infeasible, because there's no way AMD would market a GPU with clocks that low and make the statements they did. I think we are likely looking at 44CUs for Navi, based on it giving us an 1800MHz boost, which lines up perfectly with Radeon VII and gives us the 0% perf/Watt advantage of Navi over Vega 20 that we expect. Of course, 40CUs is the best case if you believe the Gonzalo leak, because it tells us that console GPU clocks are actually 10% under desktop GPU boost clocks.
RX 580 clocks are 1257MHz/1340MHz, which means Xbox One X comes within 7% of desktop base clocks and 13% of boost clocks, and so I think Gonzalo clocks are completely believable for a 40CU Navi with ~2000MHz boost clock. They are far-fetched for anything beyond 44CUs.
And drum roll, teraflop time!
Given the clocks are scaled based on CU count, all the above configurations have the same metrics: 8.2TF base, 10.1TF boost. This puts us right in the TF band we expect for consoles (if a bit on the low side). I suspect the RX 5700 is not the top-end Navi SKU, though (I would expect an 8 or 9 in the name), and there's probably a full-die version with 4-8 more CUs enabled, meaning all the above calculations are going to move up. Given consoles will most likely disable CUs for yield, this may still be a comparable situation. Conclusion: I remain team 10TF, but they punch like 12.5TF.
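And the flop math itself, assuming the GCN/RDNA layout of 64 shaders per CU at 2 FP32 ops per clock:

```python
# TFLOPS = CUs x 64 shaders x 2 FP32 ops per clock (FMA) x clock.
def teraflops(cus, mhz):
    return cus * 64 * 2 * mhz / 1_000_000

print(f"base:  {teraflops(40, 1600):.1f} TF")   # → 8.2 TF
print(f"boost: {teraflops(40, 1984):.1f} TF")   # → 10.2 TF
```

(The 44/48/52CU configurations give the same result by construction, since their clocks were scaled down in proportion.)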
lol?
You don't need 4-6 TF to stream
Defeats the entire point of a streaming box.
$99/$299/$499...
The streaming box is low-cost to bring in customers. This way they are hitting all categories.
> I don't think that there is a streaming SKU. There are two new Xbox SKUs and an xCloud app. If you don't have anything connected to your TV, then they will probably sell something similar to a Chromecast: a streaming stick.

So there's a 1080p streaming box, a 1080p "casual" box, and a 4K "hardcore" box? And here I was thinking pulling 2 SKUs off would be a tough one for MS.
The onus of proof always falls to the person making the claims.
> I'm personally expecting an Apple TV, Roku, and smart TV app for all platforms, honestly. MS is on every platform and app store right now with their products; it should be easy for them to deploy that app to as many services as possible, as long as they can have controller support, of course! Unless they release some kind of Wi-Fi controller like Stadia's.

I don't think that there is a streaming SKU. There are two new Xbox SKUs and an xCloud app. If you don't have anything connected to your TV, then they will probably sell something similar to a Chromecast: a streaming stick.
Who said the streams are bound to 1080p?
Resolution isn't the differentiator, it's the streaming. Streaming vs native. Also saying casual and hardcore makes no sense. Games are going to look the same. It's the TV that changes. You can be a hardcore gamer and not have a 4K set. I dunno why people try to classify how dedicated someone is to playing games by basing it off the amount of money they spend.
Casual audiences will be split between streaming and Lockhart. Hardcore audiences will be split between all 3, depending on their needs.
I would say aligning with set tops is better, but it's true streaming would be more flexible in the delivery options. They'll probably have both.