There's nothing nebulous about what I described. This process has resulted in a real console that exists today - the Xbox One X. Console hardware will remain fixed, I'm not saying otherwise. Both Sony and Microsoft currently have two (fixed) hardware specs, and it seems to be working just fine for them, and recent rumors suggest Nintendo may join them.
The process you describe is entirely incompatible with the process to develop a next-gen games console and I'm baffled you're suggesting this. It just points to a lack of understanding of MS's XB1X development process on your part, I'm afraid.
Their entire process is predicated on the fact that XB1X is a mid-gen refresh of the XB1, with a plethora of games released to be able to profile against their hardware simulations.
You can't run next-gen game code on a simulation of next-gen hardware when there are no next-gen games available to run.
What you're suggesting is absurd.
I'm not going to argue that a lower-spec'd machine will have lower capability; that's obvious. And I don't really know if launching with two SKUs right off the bat is better or worse than just one, but I do think it's the necessary move for MS. My argument is that if both SKUs are designed in tandem with specific capability goals in mind, with the main variable being their target resolution, it's possible to spec the machines so they have a high probability of hitting those targets.
This statement means nothing. You're saying that if MS designs a console against a target spec, they will hit that target?... You're not saying anything, and it doesn't have anything to do with the argument about a lower-spec console holding back next-gen games, which it will, as I've demonstrated in my previous posts.
I'm arguing that MS will use data generated from game profiling to simulate next gen games and next gen hardware. Then they'll build their Scarlett platforms based on that data. They've already done this, I fully expect them to do it again.
What next-gen games is MS going to profile when there are no next-gen games 2-3 yrs prior to launch during their next-gen hardware development?
You don't seem to grasp the basics of what MS did with XB1X and why. They simulated XB1X hardware and profiled games to understand how XB1X would run XB1 games before any silicon taped out. It's something AMD/Nvidia/Intel do routinely when designing new hardware. It's neither unique nor revolutionary.
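To make the "simulate the hardware, profile existing games" idea concrete, here's a toy sketch of that kind of pre-silicon estimate. Every number and subsystem name below is an invented assumption for illustration, not real profiling data: scale each profiled per-frame cost by a candidate spec's speedup factor and see which subsystem limits the frame.

```python
# Toy model (all figures are invented assumptions): estimate how a game
# profiled on existing hardware would run on a simulated candidate spec.
def estimated_frame_ms(profile_ms, speedups):
    """profile_ms: measured per-frame cost by subsystem on the old console (ms).
    speedups: candidate hardware's speedup factor per subsystem.
    Returns (limiting subsystem, estimated frame time), assuming the slowest
    subsystem dominates -- a crude model; real profiling is far more detailed."""
    scaled = {k: profile_ms[k] / speedups[k] for k in profile_ms}
    worst = max(scaled, key=scaled.get)
    return worst, scaled[worst]

# Hypothetical profile of a game on the base console: bandwidth-bound at 33 ms.
base_profile = {"gpu_compute": 28.0, "memory_bw": 33.0, "cpu": 25.0}
# Hypothetical speedup factors for a candidate refresh spec.
candidate = {"gpu_compute": 4.6, "memory_bw": 1.9, "cpu": 1.3}

print(estimated_frame_ms(base_profile, candidate))
```

Even this crude model shows the kind of insight such profiling yields: with those made-up numbers, the bottleneck shifts from memory bandwidth to the CPU on the candidate spec, which is exactly the sort of finding that would push engineers to adjust the design before tape-out.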
Next-gen consoles are different because until developers get devkits and the console is close to launch, there isn't any next-gen software to profile.
You're putting the cart before the horse, through a distinct lack of understanding.
No, it's nothing like the PC platform. I am talking about the process Xbox engineers will use to select and customize the hardware for their next-generation consoles. The console hardware is only constant once a spec is locked down; during the design phase, every aspect of the components is likely to be adjusted to create the optimum machine.
If your suggested solution to the low-spec console holding back development isn't a PC-like model where software dictates hardware requirements (rather than the converse), then whatever you're trying to suggest makes even less sense, and I don't understand what you're trying to argue.
From Digital Foundry:
What I'm saying is that this same approach can and will be used for their next-gen platforms. The target is more nebulous, but they can still get meaningful data and insight from developers to create that target. Games are different, as you said, and each one may have unique hardware bottlenecks; this approach actually looks at those bottlenecks, and the hardware can then be optimized to mitigate them. It's a more efficient way to design because it's based on data gathered and analyzed before the hardware is fabricated. The final spec would be determined by the configuration most capable of meeting the design targets for a given BOM. There's always going to be compromise, that's an inherent part of consoles in general; the goal would be to minimize its impact. I suggest looking up how engineers in various disciplines utilize DOEs (design of experiments) in their design process.
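As a toy illustration of the kind of data-driven spec selection I mean (every component name, cost, score, and target below is invented purely for illustration): enumerate candidate configurations, score them against a capability target derived from profiling data, and pick the best one under a BOM cap.

```python
# Hypothetical sketch of spec selection under a BOM constraint.
# All component options, costs, and "capability" scores are made up.
from itertools import product

# (cost in $, relative capability contribution) per component option
GPU_OPTIONS = {"36CU": (90, 4.2), "40CU": (105, 4.8), "56CU": (150, 6.0)}
RAM_OPTIONS = {"12GB": (60, 1.0), "16GB": (80, 1.3), "24GB": (120, 1.6)}

def best_config(bom_cap, target):
    """Cheapest config meeting the target, else the most capable one under the cap."""
    candidates = []
    for (gpu, (gc, gs)), (ram, (rc, rs)) in product(GPU_OPTIONS.items(),
                                                    RAM_OPTIONS.items()):
        cost, score = gc + rc, gs * rs  # toy model: multiplicative scaling
        if cost <= bom_cap:
            candidates.append((gpu, ram, cost, score))
    meeting = [c for c in candidates if c[3] >= target]
    if meeting:
        return min(meeting, key=lambda c: c[2])  # cheapest that meets target
    return max(candidates, key=lambda c: c[3])   # otherwise best under cap

# Made-up numbers: $250 cap for these two components, capability target 6.0.
print(best_config(250, 6.0))
```

A real DOE would sweep far more factors (clocks, bandwidth, cache sizes, cooling) against simulated workloads, but the shape of the exercise is the same: the data decides the spec, within a cost envelope.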
I suggest you exercise even a little bit of critical thinking on what you're trying to argue. What data can MS use to design a next-gen console? They can't time-travel and profile released next-gen games on their simulated hardware years into the gen.
I don't understand how you can't see what you're suggesting doesn't make any sense.
Again, this process works. The Xbox One X exists. I'd be interested to hear you explain what you think is the best process for a console maker in choosing the specs and customizations they want for their new console.
The Xbox One X isn't a next-gen console. That's a fundamental difference, and it's baffling to me that you can't see it.
The XB1X is a hardware platform built to play EXISTING software for an EXISTING hardware platform. Their next-gen console will be a NEW hardware platform built to play games that at the time of hardware development DON'T EVEN EXIST YET.
In the past, Sony and Microsoft have sought feedback from devs on future console hardware, and they will continue to do so. Nothing is different in this regard. During the development of the PS4, Mark Cerny flew to devs around the world to get feedback on next gen before the hardware was finalized.
Seeking feedback on new console development isn't the same as profiling commercial game code on a simulation of a new hardware platform. You're making a weird false equivalency and conflating things that aren't even remotely the same thing.
Beyond the Ryzen CPU, moving the next-generation console hardware baseline to include an SSD would arguably offer the biggest improvement to next-gen games.
Shorter load times are just a minor side benefit. If the (comparatively) slow data transfer rate of a standard HDD is mitigated, then future games can be designed around this new SSD baseline.
This is not simply shorter load times: you will be able to GREATLY increase open-world games' scene complexity, texture and animation variety, and tons more, in ways that wouldn't be possible with a standard HDD.
The ability to no longer have such a compromised data bottleneck for streaming in game assets will change the way games are designed *IF* (big if) it is standard on all Next-gen consoles.
We already know Google Stadia will use an SSD for its baseline hardware instance, so the gauntlet has already been thrown down.
I think posts like this are VASTLY overstating the benefits, TBH.
RAM capacity and bandwidth will have a MUCH bigger impact on those factors than the transfer speed of the mass storage device will. Functionally, there's little a full SSD can provide that a fast flash cache + HDD can't (provided your cache is large enough to overcome the HDD to flash streaming bottleneck).
The CPU and GPU can only process a given amount of data per unit time, and this is influenced far more by main memory bandwidth. The RAM capacity, if large enough, overcomes any streaming bottleneck of the mass storage device, and the introduction of an HBCC-managed flash cache makes the problem go away entirely.
A lower transfer speed further down the memory hierarchy can always be offset by a bigger data buffer.
So a 128/256GB flash cache plus a 24GB RAM pool with high enough main memory bandwidth would provide all the benefits of an SSD at a cheaper price point, with the added benefit of being able to use an HDD or SSD for the user-replaceable mass storage device.
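To make the buffer-offset point concrete, here's a back-of-envelope sketch. Every figure is an illustrative assumption (HDD throughput, streaming demand, cache size), and it's deliberately pessimistic in treating every read as a cache miss; in practice the cache would already hold most of the installed game's hot assets.

```python
# Back-of-envelope (all figures are illustrative assumptions): how long a
# flash cache can sustain a streaming rate the HDD alone can't deliver.
hdd_mb_s = 150          # sustained HDD read, MB/s (assumption)
demand_mb_s = 500       # game's asset streaming demand, MB/s (assumption)
cache_mb = 128 * 1024   # 128 GB flash cache

# The cache drains at (demand - refill) while the HDD refills it behind it.
drain_mb_s = demand_mb_s - hdd_mb_s
seconds_sustained = cache_mb / drain_mb_s
print(f"{seconds_sustained / 60:.1f} minutes of {demand_mb_s} MB/s "
      f"streaming from a full cache")
```

With those made-up numbers the cache sustains SSD-class streaming for several minutes straight even under worst-case 100% misses, which is the point: a big enough buffer hides the slow tier below it.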
If you mandate a full SSD across all next-gen consoles, any allowance for a user-added external drive would undermine the point of going full SSD, and there would be no difference between that and a flash-cache setup: a game installed on the external drive would still be limited by the slower external (possibly HDD) transfer speeds.