
When will the first 'next gen' console be revealed?

  • First half of 2019

    Votes: 593 15.6%
  • Second half of 2019 (let's say post-E3)

    Votes: 1,361 35.9%
  • First half of 2020

    Votes: 1,675 44.2%
  • 2021 :^)

    Votes: 161 4.2%

  • Total voters
    3,790
  • Poll closed.
Status
Not open for further replies.

Jaypah

Member
Oct 27, 2017
2,866
Love the numbers, at only a $50 difference between arcade and PS5, arcade would be DOA in a brutal fashion

Wait, you love the numbers because the arcade would be DOA? Or is that a separate thought? Regardless, I think those specs with that price difference would turn out fine for them. Game changing? Don't know, don't care, but DOA seems like crazy hyperbole.
 

Bloodcore

Member
Mar 24, 2018
137
AMD Gonzalo (allegedly PS5) has Navi 10 Lite. What would "Lite" mean in this case? I guess some stuff will be cut from the full Navi 10?
There is the possibility that Navi has more than 64CUs: let's say 80-96CUs for "Navi 10" and 104-128CUs for "Navi 20".
So having a cut-down Navi 10 design for APUs makes sense. 60-72CUs most likely, if full-fat Navi 10 is 80-96CUs.

Though there is a small possibility that they might lower the number of unified shaders per CU from 64 to 48.
If that happens, we'd need 72CUs@1450MHz to reach 10.02TFlops.
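For reference, these figures follow the standard peak-FP32 formula: CUs × shaders per CU × 2 ops per clock (fused multiply-add) × clock speed. A quick sketch (the function name is mine, not from the thread):

```python
def gpu_tflops(cus: int, shaders_per_cu: int, clock_ghz: float) -> float:
    """Peak FP32 throughput: each shader retires 2 ops/clock via fused multiply-add."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000.0

# The scenario above: 48 shaders per CU instead of the usual 64
print(round(gpu_tflops(72, 48, 1.45), 2))  # 10.02
# For comparison, the standard 64 shaders per CU at the same clock
print(round(gpu_tflops(72, 64, 1.45), 2))  # 13.36
```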
 

modiz

Member
Oct 8, 2018
17,844
Wait, you love the numbers because the arcade would be DOA? Or is that a separate thought? Regardless, I think those specs with that price difference would turn out fine for them. Game changing? Don't know, don't care, but DOA seems like crazy hyperbole.
That user is saying they would love it if the numbers were that high, but they feel it would hurt Lockhart's sales.
 

Deleted member 40133

User requested account closure
Banned
Feb 19, 2018
6,095

For less than the price of a game you get a better system. For launch-day buyers, $100 is the increment where people start to weigh the pros and cons of what to get. But $50 is nothing when you're already spending $350. It's like spending $500,000 on a house and getting squirrelly over $2,000 for hardwood in the basement. It doesn't make any sense. The One S is routinely cheaper, it is weaker, and it even has a 4K Blu-ray player. It is always outsold by the PS4 Slim. Bar the 4K player, it is exactly the situation you are outlining here: cheaper and weaker... and yet it is outsold. When you consider that the whole point of that SKU is to drive volume sales, $350 makes no sense. At $300 it's a debate.
 

Jaypah

Member
Oct 27, 2017
2,866
That user is saying they would love it if the numbers were that high, but they feel it would hurt Lockhart's sales.
Thank you. The specs across the board I would love. The price for the arcade makes no sense.

Got it. I still think DOA is hyperbole and theatrics, but that's our community; you're either number one or you're dead and buried. Sony will be going into next gen with even more momentum than they had going into PS4, but MS isn't about to fuck up every single possible part of the Scarlett launch like they did last time. So while I expect Sony to lead in sales out of the gate, I don't think an Xbox Arcade specced/priced like that would be considered a failure or DOA.
 

Lady Gaia

Member
Oct 27, 2017
2,479
Seattle
For sure - the Xbox One X ran at 1172MHz at 16nm. It wouldn't be a big deal for it to run at 1500MHz at 7nm.

It's far more common for GPUs to keep clock speed increases small to nonexistent across manufacturing nodes while focusing on increasing parallel execution units. Thermal constraints are your ultimate limiting factor, and you invariably gain more by adding CUs than bumping clock speeds within the same thermal budget.

All subject to architectural limits and manufacturability, of course, so we'll see what the right balance is for Navi ... but I'd be shocked if they pushed clock speeds. That's the game you play when you've hit architectural limits or exploitable parallelism and have no other way to improve performance.
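A rough way to see why wider-and-slower tends to win within a thermal budget: dynamic power scales roughly with frequency × voltage², and voltage rises roughly linearly with frequency, so power grows with roughly the cube of clock speed while throughput grows only linearly. A toy first-order model (the specific CU counts and clocks are illustrative, not leaked specs):

```python
def rel_power(cus: int, clock: float) -> float:
    """First-order dynamic power: P ~ units * f * V^2, and with V ~ f, P ~ units * f^3."""
    return cus * clock ** 3

def rel_perf(cus: int, clock: float) -> float:
    """Throughput scales with parallel units times clock (ignoring memory limits)."""
    return cus * clock

wide = (56, 1.0)   # spend the budget on more CUs at a modest clock
fast = (40, 1.12)  # or on fewer CUs clocked ~12% higher
print(rel_power(*wide), rel_power(*fast))  # ~56 vs ~56.2: comparable power budgets
print(rel_perf(*wide), rel_perf(*fast))    # 56 vs ~44.8: the wider chip wins
```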
 

SharpX68K

Member
Nov 10, 2017
10,518
Chicagoland
Until Richard puts out a new video, his range was 11-15 teraflops. Tempering our expectations still puts us in the double-digits.

If his expectations had dropped from 11 to 8, it seems like he would have put out a new video making a solid case for that; it's not like that type of video wouldn't generate a lot of buzz.

I fully agree with that. I'm going with what Richard at DF / Eurogamer has said until a new video or article comes out with different expectations.

Anyway, I just cannot imagine a PS5 in 2020 (even at $399) with only 2 more TFlops (8TF) and 4GB more RAM (16GB) than the Xbox One X had in 2017 (even though that was $499).

Sony knows full well what they did to Microsoft in 2013, and they have to be expecting MS to come out with guns blazing in 2020. They'd have to be more than blind and stupid to come out with a PS5 that's perceived as underwhelming and disappointing.

If the rumors are true that Sony originally planned to have the PS5 out for Holiday 2019 but pushed it a year to Holiday 2020 (and that would not have been a recent move, but more like early or mid 2017), then I hope that provides a boost to the final performance of the PS5, even if what happened is never documented.
 

Deleted member 17092

User requested account closure
Banned
Oct 27, 2017
20,360
It is, but it's educated. Vega VII boosts to 1800MHz and Turing goes near that high already at 12nm. If Navi is the least bit more efficient than Vega, you're at 2GHz.

Pascal and Turing both do 2000MHz relatively easily. Nvidia's architectures have been hitting much higher clocks than AMD's for a long time now, and AMD has only somewhat kept up outside of Nvidia's halo chips despite the lower clocks (not really, as catching Nvidia's x80-series chips has meant huge 400mm²+ dies and a lot of flops for AMD versus smaller Nvidia mid-range dies with fewer flops). 980 Tis could hit 1500MHz back in 2015. Since Pascal, though, Nvidia has been somewhat stuck at 2000MHz, so it will be interesting to see what kind of clocks AMD can achieve with a non-Vega chip / possibly non-GCN architecture.
 

VX1

Member
Oct 28, 2017
7,000
Europe
If the rumors are true that Sony originally planned to have the PS5 out for Holiday 2019 but pushed it a year to Holiday 2020 (and that would not have been a recent move, but more like early or mid 2017), then I hope that provides a boost to the final performance of the PS5, even if what happened is never documented.

But how big a boost? If PS5 was planned for Q4 2019, and maybe delayed for whatever reason, that means the PS5 specs were locked a long time ago. What can they possibly change after that with a ~1 year delay? The only realistic option I see is to raise CPU/GPU clocks a bit, like MS did with the 1X that launched 1 year after the Pro. But that also required an expensive cooling solution, and I highly doubt Sony would use something like that in a $399 box.
 

msia2k75

Member
Nov 1, 2017
601
There is the possibility that Navi has more than 64CUs: let's say 80-96CUs for "Navi 10" and 104-128CUs for "Navi 20".
So having a cut-down Navi 10 design for APUs makes sense. 60-72CUs most likely, if full-fat Navi 10 is 80-96CUs.

Though there is a small possibility that they might lower the number of unified shaders per CU from 64 to 48.
If that happens, we'd need 72CUs@1450MHz to reach 10.02TFlops.

No, not at all. 72CU@1.45GHz -> 13.3TF.
 

Bloodcore

Member
Mar 24, 2018
137
Why would something like that happen?
They might scale down the "shader engines" slightly; this might be one of the things they do to improve scalability. So instead of 4 normal shader engines, they'd go for 6 or 8 slightly smaller shader engines.

This would also make them easier to feed with data, which is one of the issues that prevented them from going above 4 engines in the past (according to an interview).
 

modiz

Member
Oct 8, 2018
17,844
So... I know insiders here told us to ignore that user, but I could not help it and looked over osiris's post again on the old forum about his PS5 "leak".
It sounded pretty crazy, but now I don't know anymore.
First, it accurately predicted the CPU speed from the Gonzalo leak: 3.2GHz.
But then the second part sounded unbelievable: a 2.1GHz GPU. Even Nvidia's fastest GPUs don't reach that speed, I think, but now we have komachi saying they think Navi 10 will be around that range... what if it's real?
The last part also sounded very weird: 18GB of GDDR5.
 
Oct 27, 2017
4,018
Florida
It's far more common for GPUs to keep clock speed increases small to nonexistent across manufacturing nodes while focusing on increasing parallel execution units. Thermal constraints are your ultimate limiting factor, and you invariably gain more by adding CUs than bumping clock speeds within the same thermal budget.

All subject to architectural limits and manufacturability, of course, so we'll see what the right balance is for Navi ... but I'd be shocked if they pushed clock speeds. That's the game you play when you've hit architectural limits or exploitable parallelism and have no other way to improve performance.

True - we'll have to see if AMD can push past the 64CU limit on GCN. So many questions remain unanswered until they outline Navi.
 

Deleted member 40133

User requested account closure
Banned
Feb 19, 2018
6,095
So... I know insiders here told us to ignore that user, but I could not help it and looked over osiris's post again on the old forum about his PS5 "leak".
It sounded pretty crazy, but now I don't know anymore.
First, it accurately predicted the CPU speed from the Gonzalo leak: 3.2GHz.
But then the second part sounded unbelievable: a 2.1GHz GPU. Even Nvidia's fastest GPUs don't reach that speed, I think, but now we have komachi saying they think Navi 10 will be around that range... what if it's real?
The last part also sounded very weird: 18GB of GDDR5.

I'm basically with you on everything except the GPU. Even the 1TB SSD sounded crazy until we found out about MS's plans. The Navi 10 clocks are most likely desktop clocks, which are not going to happen in a console.
 

Pheonix

Banned
Dec 14, 2018
5,990
St Kitts
I am Team 12TF. With a twist.

PS5 ($500 console sold for $399)
CPU Ryzen 2 8C(16T)@3.2GHz
GPU 72CU 12TF@1.3GHz
RAM 16GB GDDR6 @ 512GB/s + 4GB LPDDR4
STORAGE 240GB embedded NAND flash + 2TB HDD

XBA ($500+ console sold for $399)
CPU Ryzen 2 8C(16T)@3.5GHz
GPU 72CU 12TF@1.3GHz
RAM 16GB GDDR6 @ 512GB/s + 4GB-8GB LPDDR4
STORAGE 240GB embedded NAND flash + 2TB HDD

XBL ($400 console sold for $299; the 1080p/1440p console)
CPU Ryzen 2 8C(16T)@2.5GHz
GPU 56CU 7TF@1GHz
RAM 12GB GDDR6 @ 384GB/s + 4GB LPDDR4
STORAGE 120GB embedded NAND flash + 1TB HDD
 

Putty

Double Eleven
Verified
Oct 27, 2017
931
Middlesbrough
I just noticed this; he mentioned it a few days ago:



Ariel is PS5, Arden is the next Xbox (one of the SKUs)... "internal GPU"... interesting.

Pure speculation: what if Arden is the high-end Anaconda with a full discrete GPU (no APU), so MS can be certain when they say they will have the most powerful console next gen?


PLEASE don't use anything to do with Blue Nugroho! He is Mister X Media's "Technical Gentleman" otherwise known as "mistercteam"...
 

Deleted member 40133

User requested account closure
Banned
Feb 19, 2018
6,095
PLEASE don't use anything to do with Blue Nugroho! He is Mister X Media's "Technical Gentleman" otherwise known as "mistercteam"...

Good to know, thanks! I have a question: is there a number, teraflop-wise, that studios in general seem to be expecting? Or even want? I completely understand if you can't answer.
 
Oct 27, 2017
7,139
Somewhere South
I was reading an article the other day that highlighted something about chip design - GPUs, especially - that I hadn't really thought a lot about: the power (and heat) cost of moving data around. Per article, the energetic cost of shunting stuff around has been decreasing slower than the power used by the logic itself, while datapaths have been increasing in width and length. The net result is that a proportionally larger part of the power budget is being consumed not by logic, but just by moving data around.

So, in theory, what you want to do is to keep data as local as possible when working on it. And that's one area where something like Super SIMD will shine, because it will make it possible to queue a bunch of operations on some data that will be executed sequentially without the data ever leaving the ALU at all - and, even further, since ALUs are paired, you can have one feeding the other directly, say, one running a multiply and the other evaluating the result, with no round trips. That would mean that Super SIMD is not only more efficient from a workload perspective, but also from an energetic perspective.

Also makes me wonder if the current logic arrangement for AMD GPUs isn't partially to blame for suboptimal power usage.
 

anexanhume

Member
Oct 25, 2017
12,913
Maryland
I was reading an article the other day that highlighted something about chip design - GPUs, especially - that I hadn't really thought a lot about: the power (and heat) cost of moving data around. Per article, the energetic cost of shunting stuff around has been decreasing slower than the power used by the logic itself, while datapaths have been increasing in width and length. The net result is that a proportionally larger part of the power budget is being consumed not by logic, but just by moving data around.

So, in theory, what you want to do is to keep data as local as possible when working on it. And that's one area where something like Super SIMD will shine, because it will make it possible to queue a bunch of operations on some data that will be executed sequentially without the data ever leaving the ALU at all - and, even further, since ALUs are paired, you can have one feeding the other directly, say, one running a multiply and the other evaluating the result, with no round trips. That would mean that Super SIMD is not only more efficient from a workload perspective, but also from an energetic perspective.

Also makes me wonder if the current logic arrangement for AMD GPUs isn't partially to blame for suboptimal power usage.
Yup. Trace resistivity is not scaling, so IO becomes more expensive. Intel tried to innovate here on 10nm with Cobalt but other vendors have said Cu is still good until 7nm. The issue is the buildup layer around the Cu to combat electromigration is not as conductive as the Cu itself.

The spirit of your post is somewhat reflected in the latest Cerny patent. It repeatedly talks about manipulating data in local caches. The Turing architecture also upped cache sizes from Pascal.

https://patents.justia.com/patent/20190035050
 

RevengeTaken

Banned
Aug 12, 2018
1,711
So... I know insiders here told us to ignore that user, but I could not help it and looked over osiris's post again on the old forum about his PS5 "leak".
It sounded pretty crazy, but now I don't know anymore.
First, it accurately predicted the CPU speed from the Gonzalo leak: 3.2GHz.
But then the second part sounded unbelievable: a 2.1GHz GPU. Even Nvidia's fastest GPUs don't reach that speed, I think, but now we have komachi saying they think Navi 10 will be around that range... what if it's real?
The last part also sounded very weird: 18GB of GDDR5.
Do you have a link to osiris's post?
 

Deleted member 12635

User requested account closure
Banned
Oct 27, 2017
6,198
Germany
I just noticed this; he mentioned it a few days ago:



Ariel is PS5, Arden is the next Xbox (one of the SKUs)... "internal GPU"... interesting.

Pure speculation: what if Arden is the high-end Anaconda with a full discrete GPU (no APU), so MS can be certain when they say they will have the most powerful console next gen?

Please delete everything you get from this idiot Blue whatever... He is the guy who came up with the idea that the Xbox One had a hidden second GPU.
 

Deleted member 34239

User requested account closure
Banned
Nov 24, 2017
1,154
So... I know insiders here told us to ignore that user, but I could not help it and looked over osiris's post again on the old forum about his PS5 "leak".
It sounded pretty crazy, but now I don't know anymore.
First, it accurately predicted the CPU speed from the Gonzalo leak: 3.2GHz.
But then the second part sounded unbelievable: a 2.1GHz GPU. Even Nvidia's fastest GPUs don't reach that speed, I think, but now we have komachi saying they think Navi 10 will be around that range... what if it's real?
The last part also sounded very weird: 18GB of GDDR5.
GDDR5 is not really weird, since they can get almost 1TB/s of bandwidth by using a 512-bit bus. I'd imagine procuring GDDR5 would be far cheaper than GDDR6 or GDDR5X, but then again, that assumption could be wrong.
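As a sanity check on the bandwidth math: peak bandwidth is per-pin data rate × bus width ÷ 8. With GDDR5's typical ~8 Gbps per pin, a 512-bit bus lands at 512 GB/s; getting near 1 TB/s on the same bus would take GDDR5X/GDDR6-class data rates (the rates below are illustrative figures, not rumored specs):

```python
def peak_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin rate times bus width, converted to bytes."""
    return data_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gb_s(8, 512))   # 512.0  (GDDR5-class rate)
print(peak_bandwidth_gb_s(16, 512))  # 1024.0 (GDDR5X/GDDR6-class rate)
```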
 

M3rcy

Member
Oct 27, 2017
702
I was reading an article the other day that highlighted something about chip design - GPUs, especially - that I hadn't really thought a lot about: the power (and heat) cost of moving data around. Per article, the energetic cost of shunting stuff around has been decreasing slower than the power used by the logic itself, while datapaths have been increasing in width and length. The net result is that a proportionally larger part of the power budget is being consumed not by logic, but just by moving data around.

So, in theory, what you want to do is to keep data as local as possible when working on it. And that's one area where something like Super SIMD will shine, because it will make it possible to queue a bunch of operations on some data that will be executed sequentially without the data ever leaving the ALU at all - and, even further, since ALUs are paired, you can have one feeding the other directly, say, one running a multiply and the other evaluating the result, with no round trips. That would mean that Super SIMD is not only more efficient from a workload perspective, but also from an energetic perspective.

Also makes me wonder if the current logic arrangement for AMD GPUs isn't partially to blame for suboptimal power usage.

That all makes a lot of sense.
 
Feb 10, 2018
17,534
I wonder how long current-gen consoles will still be supported; it would be a shame if games that could work on the mid-gen consoles don't come to them.

Also, I hope that next gen any Bluetooth headphones or earbuds will work with the console.
I also hope they can both turn on my TV when I turn on the console, like the Xbox One does now.
 

vivftp

Member
Oct 29, 2017
19,763
I wonder how long current-gen consoles will still be supported; it would be a shame if games that could work on the mid-gen consoles don't come to them.

Also, I hope that next gen any Bluetooth headphones or earbuds will work with the console.
I also hope they can both turn on my TV when I turn on the console, like the Xbox One does now.

Well, Sony likes their 10-year plans, so I wouldn't be surprised to see the PS4 still receiving some titles until around then. But I'm sure both companies will push the incentive of playing their next-gen-only titles via cloud streaming.
 
Feb 8, 2018
2,570
He said 8TF is needed for current-gen games to run properly at 4K (although the 6TF Xbox One X is doing just fine at 4K). For next gen we need true next-gen graphical fidelity, not just current games running at 4K with almost the same graphics.

I just don't see 8TF in PS5. The Xbox One X has 6TF with old 16nm Polaris. Navi is probably the next big thing for AMD after Ryzen, and it's also 7nm; honestly, anything below 8TF would be disappointing.

I expect 72CUs running at 1300MHz, which is 12TF, double the power of the Xbox One X.

Could be that 8TF was an overestimation then. A first-party studio can get more out of 8TF than other studios. I agree that on paper it doesn't sound like a big jump from Scorpio. There would definitely be a visible difference, but likely not big enough.
 
Oct 27, 2017
7,139
Somewhere South
Yup. Trace resistivity is not scaling, so IO becomes more expensive. Intel tried to innovate here on 10nm with Cobalt but other vendors have said Cu is still good until 7nm. The issue is the buildup layer around the Cu to combat electromigration is not as conductive as the Cu itself.

Oh, I see. Thanks for the explanation, it makes total sense. I had read about Intel going with cobalt for the traces and didn't really get the reasoning behind it (and I also recall reading that it didn't go so well for them; they had lots of issues, like failures due to thermal cycling).

The spirit of your post is somewhat reflected in the latest Cerny patent. It repeatedly talks about manipulating data in local caches. The Turing architecture also upped cache sizes from Pascal.

https://patents.justia.com/patent/20190035050

Oh, shiny. Hadn't seen this latest patent yet. Gotta take some time to read and digest it, looks dense.

Also, cool to have some validation that my reasoning isn't completely busted :D
 

'V'

Banned
May 19, 2018
772
The general idea I'm getting from reading some of the hardware prediction posts is that the next Xbox (high-end model) will be the most powerful console of next gen. Is there a reason for this based on leaks, or is it just what a lot of people expect?
 