I just asked where the misinformation was in the post you replied to.
Done with this one. It ain't just interpolation and extrapolation. You said that it is; that's incorrect, and that's the misinformation.
We don't know exactly how the prediction will work, so I would calm down, first investigate what might be happening (as shown in the videos I shared) and then comment on that, instead of jumping to conclusions. You are declaring that dozens of variations will be simulated on each frame, but I would like to know where you are getting this information, as no such statement is made in the interview.
Of course that is how the prediction works; it can't just take one guess and render one frame back the way a local console does. The prediction tech will whittle down a list of possible machine-learnt outcomes, and once the player's input(s) are parsed, the ready-to-go, latency-saving frame is sent. That is the methodology of how the tech works. We're not privy to the technical specifics as yet, but the same method is already around in other, non-gaming tech.
Of course there are dozens of variations. Take a simple list of possible inputs, or combinations of inputs, on a given frame or prediction interval (as I don't believe they will do this for every single frame) and you'll see it comes down to variations and a small pooled list of which ones to prerender. Take a moment in Halo, for example. The player decides to move left, shoot and jump, all in the same frame/interval. The variations for prediction are that the player could have stayed where they were, moved right, crouched, swapped weapons, moved forward or backward, etc.
The Stadia prediction will rule out unlikely ones and not prerender those. It will prerender a "short list" of variations; once the input is received (or the AI decides to send the stream frame down), it cycles again and again in a continuous dance of which variations to render and which one to send. Rinse, repeat.
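To make that cycle concrete, here's a minimal Python sketch of the predict/prerender/send loop described above. Everything in it (the input list, the shortlist size, the stand-in "model") is invented for illustration; Google has not published how Stadia's prediction actually works.

```python
import random

# Illustrative sketch of the speculative-rendering cycle.
# All names and numbers here are made up, not Stadia's real internals.

POSSIBLE_INPUTS = ["none", "left", "right", "jump", "crouch", "shoot", "swap"]

def predict_shortlist(history, k=3):
    """Stand-in for the ML model: keep the k inputs judged most likely."""
    # A real model would rank inputs from player history; we just sample.
    return random.sample(POSSIBLE_INPUTS, k)

def render(state, inp):
    """Stand-in for rendering one frame of the game world."""
    return f"frame({state}, {inp})"

def streaming_cycle(state, actual_input):
    """One prediction interval: prerender a shortlist, then serve the frame."""
    cache = {inp: render(state, inp) for inp in predict_shortlist(None)}
    if actual_input in cache:
        return cache[actual_input], True       # hit: frame was already waiting
    return render(state, actual_input), False  # miss: render as normal
```

On a hit, the frame is already rendered when the input arrives (the latency saving); on a miss, it falls back to rendering normally, just like a local machine would.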
So what you say is rather moot; this is the basis of the AI/machine-learning predictions.
Let's say they just simulate you pressing the shoot button, as seen in the Outatime video I shared. Only that and nothing else. Right there it would mean the Stadia server has a frame in the queue, ready to be delivered the second the input arrives from the client's side. That right there would be an example of negative latency as defined by Google. Right now you are simply jumping to conclusions and refuting several things that aren't even part of the interview with the Stadia engineer. I don't know why you'd use an unsupported part of the world as an example.
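As back-of-the-envelope arithmetic, a correct prediction removes the render step from the round trip. The millisecond figures below are purely illustrative; Google has published no such numbers.

```python
# Illustrative timings only, not Google's figures.
uplink_ms = 15    # client input travelling to the server
render_ms = 16    # rendering one frame (~60 fps budget)
downlink_ms = 15  # encoded frame travelling back to the client

# Without prediction: the server can only start rendering after the input arrives.
no_prediction = uplink_ms + render_ms + downlink_ms

# With a correct prediction: the frame was rendered speculatively and is
# already queued, so only the network transit remains.
correct_prediction = uplink_ms + downlink_ms

print(no_prediction, correct_prediction)  # 46 30
```

The saving is exactly the render time; with these made-up numbers that's 16 ms per correctly predicted frame.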
See above. They can't just render one prediction for one simple input. They have to decide which variations to render, then decide which one to actually send. This isn't going to be a 100% solution. Think of it like dynamic resolution: predictions are sometimes going to be correct and save latency round-trip time, and sometimes they're going to be incorrect and the server will have to render as normal, extending that round trip.
There is room for developer decisions here. E.g. they could simply overrule player inputs so the game appears smooth/real-time, they could accept some latency/lag while they correct prediction mistakes, or they may have a sync system in place to keep things ticking along.
Here's a dumbed-down version for ya: you watch a YouTube video, and their server and your local computer cache/download what you are watching ahead of time so it's smooth and high quality, with bit rate and resolution jumping up and down as things slow down or speed up. Now introduce variations: you can change the video you're watching on any given frame, and YT has to predict which one you'll be inputting and watching at any given frame or interval. Gaming introduces that level of input and context variation while you play games such as fighting or FPS games.
Where does this "10 times the processing power" comment come from? And 10 times what, exactly? An Xbox One, a PS4, a 10K gaming PC? Where are you getting this information?
From the quote by the Google dev themselves -
Google Stadia will be faster and more responsive than local gaming systems in "a year or two," according to VP of engineering Majd Bakar. Thanks to some precog trickery, Google believes its streaming system will be faster than the gaming systems of the near-future, no matter how powerful they may become. But if the system is playing itself, does that really count?
Speaking with Alex Wiltshire in Edge magazine #338, Google's top streaming engineer claims the company is verging on gaming superiority with its cloud streaming service, Stadia, thanks to the advancements it's making in modelling and machine learning. It's even eyeing up the gaming performance crown in just a couple of years.
So they have to render a shortlist of variations in-game, which means they have to render multiple variations of the game on the fly and repeat that continuously. This requires more processing power than local hardware rendering just one game world per frame.
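That extra cost scales roughly linearly with the shortlist size. A trivial sketch, where the shortlist size is a made-up figure and per-world render cost is normalised to 1:

```python
# A local console renders one game world per frame.
local_cost = 1.0

# A speculating server renders the whole shortlist each interval. If the
# shortlist holds, say, 10 variations (an assumed figure, not Google's),
# the raw render work is roughly 10x local hardware, before any other
# overhead like encoding the stream or running the prediction model.
shortlist_size = 10
server_cost = shortlist_size * local_cost

print(server_cost)  # 10.0
```

This is the crude sense in which "more processing power" follows directly from prerendering variations; the real ratio would depend on how aggressively the shortlist is pruned.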