Love the numbers, at only a $50 difference between arcade and PS5, arcade would be DOA in a brutal fashion

Uh why?
> There is the possibility that Navi has more than 64CUs, let's say 80-96CUs for "Navi 10" and 104-128CUs for "Navi 20".

AMD Gonzalo (allegedly PS5) has Navi 10 lite. What would "lite" mean in this case? I guess some stuff will be cut from full Navi 10?
that user is saying he would love it if the numbers were that high; however, they feel it would hurt Lockhart's sales.

Wait, you love the numbers because the arcade would be DOA? Or is that a separate thought? Regardless, I think those specs with that price difference would turn out fine for them. Game changing? Don't know, don't care, but DOA seems like crazy hyperbole.
Thank you. I would love the specs across the board, but the price for the arcade makes no sense.
Roughly 15% faster at iso-clock. Alpha drivers, unoptimized benchmark, etc.
For sure - the Xbox One X ran at 1172MHz at 16nm. It wouldn't be a big deal for it to run at 1500MHz at 7nm.
> How does he know the Navi desktop GPU would run at ~2GHz? Sounds like pure speculation to me...

It is, but it's educated. Vega VII boosts to 1800MHz and Turing goes near that high already at 12nm. If Navi is the least bit more efficient than Vega, you're at 2GHz.
Until Richard puts out a new video, his range was 11-15 teraflops. Tempering our expectations still puts us in the double-digits.
If his expectations had dropped from 11 to 8, it seems like he would have put out a new video making a solid case for that; it's not like that type of video wouldn't generate a lot of buzz.
If the rumors are true that Sony originally planned to have the PS5 out for Holiday 2019 but pushed it back a year to Holiday 2020 (and that would not have been a recent move, more like early or mid 2017), then I hope that provides a boost to the final performance of the PS5, even if what happened is never documented.
There is the possibility that Navi has more than 64CUs, let's say 80-96CUs for "Navi 10" and 104-128CUs for "Navi 20".
So having a cut-down Navi 10 design for APUs makes sense: 60-72CUs most likely, if full-fat Navi 10 is 80-96CUs.
Though there is a small possibility that they might lower the amount of unified shaders per CU from 64 to 48.
If this happens, we'll need 72CUs@1450MHz to reach 10.02TFlops.
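Sanity-checking that arithmetic (a quick sketch: the 2 FLOPs per shader per cycle is the standard fused-multiply-add assumption for GCN-style GPUs, and 48 shaders per CU is this post's hypothetical, not a confirmed spec):

```python
def tflops(cus, shaders_per_cu, clock_mhz):
    """Peak FP32 throughput: each shader retires 2 FLOPs per cycle (one FMA)."""
    return cus * shaders_per_cu * 2 * clock_mhz * 1e6 / 1e12

# The hypothetical 48-shader CU layout from the post above:
print(round(tflops(72, 48, 1450), 2))  # 10.02
# Same CU count and clock with the usual 64 shaders per CU, for comparison:
print(round(tflops(72, 64, 1450), 2))  # 13.36
```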
Not if they moved to 48 unified shaders per CU.
They might scale down the "shader engines" slightly; this might be one of the things they do to improve scalability. So instead of 4 normal shader engines, they'd go for 6 or 8 slightly smaller ones.
It's far more common for GPUs to keep clock speed increases small to nonexistent across manufacturing nodes while focusing on increasing parallel execution units. Thermal constraints are your ultimate limiting factor, and you invariably gain more by adding CUs than bumping clock speeds within the same thermal budget.
All subject to architectural limits and manufacturability, of course, so we'll see what the right balance is for Navi ... but I'd be shocked if they pushed clock speeds. That's the game you play when you've hit architectural limits or run out of exploitable parallelism and have no other way to improve performance.
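The intuition can be sketched with the usual dynamic-power model, P ∝ C·V²·f, with voltage roughly tracking frequency near the operating point; the resulting cubic exponent is a simplifying assumption, not a measured Navi figure:

```python
def power_cost_for_speedup(speedup):
    """Relative power needed for a given speedup, two ways.

    Clocking up: perf ~ f, but P ~ C*V^2*f with V tracking f, so P ~ f^3.
    Widening:    perf ~ CU count, P ~ CU count (ideal scaling, fixed clocks).
    Returns (power via clocks, power via extra CUs), relative to baseline.
    """
    return speedup ** 3, speedup

via_clocks, via_cus = power_cost_for_speedup(1.2)
print(round(via_clocks, 2))  # 1.73 -> ~73% more power for +20% perf via clocks
print(via_cus)               # 1.2  -> 20% more power for +20% perf via CUs
```

Under this toy model, widening wins decisively within a fixed thermal budget, which is the point the post above is making.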
so... I know insiders over here told us to ignore that user, but I could not help it and looked over at osiris's post again in the old forum about his PS5 "leak".
It sounded pretty crazy, but now I don't know anymore.
First, it accurately predicted the CPU speed from the Gonzalo leak at 3.2GHz.
But then the second part sounded unbelievable: a 2.1GHz GPU. Even Nvidia's fastest GPUs don't reach that speed, I think, but now we have komachi saying they think Navi 10 will be around that range... what if it's real?
That last part also sounded very weird: 18GB GDDR5.
I just noticed this, he mentioned it a few days ago:
Ariel is PS5, Arden is the next Xbox, one of the SKUs... "internal GPU"... interesting.
Yes, that's what it means in this context. It doesn't tell us whether it's monolithic or chiplet, however.
Pure speculation: how about Arden is the high-end Anaconda with a full discrete GPU (no APU), so MS can be certain when they say they will have the most powerful console next gen...?
PLEASE don't use anything to do with Blue Nugroho! He is Mister X Media's "Technical Gentleman", otherwise known as "mistercteam"...
> Besides... How many people are there in the world that know not just MS's specs and pricing strategy, but also Sony's?!

Mmm, someone high up at AMD? Maybe fewer than 100 highly paid engineers and their managers. Possibly C-suite at EA/AB/Ubi, etc.
> I was reading an article the other day that highlighted something about chip design - GPUs, especially - that I hadn't really thought a lot about: the power (and heat) cost of moving data around. Per the article, the energy cost of shunting data around has been decreasing more slowly than the power used by the logic itself, while datapaths have been increasing in width and length. The net result is that a proportionally larger part of the power budget is being consumed not by logic, but just by moving data around.
> So, in theory, what you want to do is keep data as local as possible while working on it. And that's one area where something like Super SIMD will shine, because it will make it possible to queue a bunch of operations on some data that will be executed sequentially without the data ever leaving the ALU at all - and, even further, since ALUs are paired, you can have one feeding the other directly, say, one running a multiply and the other evaluating the result, with no round trips. That would mean that Super SIMD is not only more efficient from a workload perspective, but also from an energy perspective.
> Also makes me wonder if the current logic arrangement for AMD GPUs isn't partially to blame for suboptimal power usage.

Yup. Trace resistivity is not scaling, so IO becomes more expensive. Intel tried to innovate here on 10nm with cobalt, but other vendors have said Cu is still good until 7nm. The issue is that the buildup layer around the Cu to combat electromigration is not as conductive as the Cu itself.
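The scale of the effect can be sketched with ballpark per-operation energies. The numbers below are rough, 45nm-era figures of the kind often cited in computer-architecture talks; they are illustrative assumptions only, not measured values for any current GPU:

```python
# Rough, illustrative per-operation energy costs (picojoules).
FP_MUL_PJ = 4        # one 32-bit floating-point multiply
SRAM_READ_PJ = 5     # read one 32-bit word from a small local SRAM
DRAM_READ_PJ = 640   # read one 32-bit word from off-chip DRAM

def energy_fraction_moving_data(flops_per_word, from_dram=True):
    """Fraction of total energy spent moving data rather than computing,
    for a workload doing `flops_per_word` multiplies per 32-bit word fetched."""
    move = DRAM_READ_PJ if from_dram else SRAM_READ_PJ
    compute = flops_per_word * FP_MUL_PJ
    return move / (move + compute)

# Streaming workload, 4 FLOPs per word fetched from DRAM:
print(round(energy_fraction_moving_data(4), 2))                   # 0.98
# Same work with the data kept in local SRAM instead:
print(round(energy_fraction_moving_data(4, from_dram=False), 2))  # 0.24
```

Even with these crude numbers, the gap illustrates why keeping data local to the ALUs, as described above, pays off so heavily in the power budget.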
> so... I know insiders over here told us to ignore that user, but I could not help it and looked over at osiris's post again in the old forum about his PS5 "leak" […]

Do you have any link to osiris's post?
> That last part also sounded very weird, 18GB GDDR5.

GDDR5 is not really weird, since they can get around 512GB/s of bandwidth by using a 512-bit bus at GDDR5's top 8Gbps per pin. I'd imagine procuring GDDR5 would be far cheaper than GDDR6 or GDDR5X, but then again, that assumption could be wrong.
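The bus-width arithmetic behind that bandwidth estimate is simple; 8Gbps per pin is roughly the fastest GDDR5 shipped, and the PS4's actual 5.5Gbps configuration is included as a reference point:

```python
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    """Peak memory bandwidth in GB/s: bus width (bits) x per-pin rate / 8."""
    return bus_width_bits * gbps_per_pin / 8

print(bandwidth_gbs(512, 8.0))  # 512.0 GB/s: 512-bit GDDR5 at its fastest
print(bandwidth_gbs(256, 5.5))  # 176.0 GB/s: the PS4's actual configuration
```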
Please delete everything you get from this idiot Blue whatever ... He is the guy who came up with the idea that the Xbox One has a hidden second GPU.
I am surprised that I can see his tweets here, as I have blocked this guy on Twitter to avoid the nonsense he is usually spreading ...

Yeah, Putty just told me, I had no idea who that is.
Btw, how is your new power-hungry toy?
I wonder how long the current-gen consoles will still be supported; it would be a shame if games that could run on the mid-gen machines don't come to them.
Also, I hope that next gen any Bluetooth headphones or earbuds will work with the console.
Also, I hope they both can turn on my TV when I turn on the console, like the Xbox One does now.
He said 8TF is needed for current-gen games to run properly at 4K (although the 6TF Xbox One X is doing just fine at 4K). For next gen we need true next-gen graphical fidelity, not just current games running at 4K with almost the same graphics.
I just don't see 8TF in PS5. The Xbox One X has 6TF with old 16nm Polaris. Navi is probably the next big thing for AMD after Ryzen, and it's also 7nm; honestly, anything below 8TF would be disappointing.
I expect 72CUs running at 1300MHz, which is 12TF, double the power of the Xbox One X.
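Both of those figures line up under the standard GCN arithmetic of 64 shaders per CU and 2 FLOPs (one FMA) per shader per cycle; the 72CU/1300MHz configuration is this post's guess, not a confirmed spec:

```python
def gcn_tflops(cus, clock_mhz, shaders_per_cu=64):
    """Peak FP32 TFlops: CUs x shaders x 2 FLOPs (one FMA) per cycle."""
    return cus * shaders_per_cu * 2 * clock_mhz * 1e6 / 1e12

print(round(gcn_tflops(72, 1300), 2))  # 11.98 - the ~12TF estimate above
print(round(gcn_tflops(40, 1172), 2))  # 6.0   - Xbox One X: 40CUs at 1172MHz
```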
Yup. Trace resistivity is not scaling, so IO becomes more expensive. Intel tried to innovate here on 10nm with Cobalt but other vendors have said Cu is still good until 7nm. The issue is the buildup layer around the Cu to combat electromigration is not as conductive as the Cu itself.
The spirit of your post is somewhat reflected in the latest Cerny patent. It repeatedly talks about manipulating data in local caches. The Turing architecture also upped cache sizes from Pascal.
https://patents.justia.com/patent/20190035050
The general idea I'm getting from reading some of the hardware prediction posts is that the next Xbox (high-end model) will be the most powerful console of next gen. Is there a reason for this based on leaks, or is it just what a lot of people expect?
Because Phil said so.

Forgive me, I haven't been following the next-gen hardware news as much recently, but did he explicitly say it'll be the most powerful console next gen?
No, really, the reasoning behind that is that Phil said so.