
Dunlop

Member
Oct 25, 2017
8,511
Okay, that's instantly way shittier, 14ms for the fastest one... :(

edit: Then I read this

"Why is "ping time to Google" (or packet drop rate) not a good measure of the performance of Google services?"


Still don't understand anything though lol
As others have mentioned, that is a really good response time, mine varies between 8 - 15 and there is a Data Center in my city
 

Alucardx23

Member
Nov 8, 2017
4,716
As others have mentioned, that is a really good response time, mine varies between 8 - 15 and there is a Data Center in my city

Holy shit. You're a lucky guy. I get around 40ms doing a test for GeForce Now and 70ms for the Google test. I don't live in a supported location, hope that Starlink can help with that in my case.
 

Dunlop

Member
Oct 25, 2017
8,511
Holy shit. You're a lucky guy. I get around 40ms doing a test for GeForce Now and 70ms for the Google test. I don't live in a supported location, hope that Starlink can help with that in my case.
It's why I really want to test the technology out :) I have a 1Gbps unlimited connection

On a side note, I have a couple of days to accept a new job offer and if I do that one will require me to frequently travel to the US so I will get to try the other end of the spectrum...crappy hotel wifi lol
 

Hzsn724

Member
Nov 10, 2017
1,767
I'm the same, launch PS4 + 1 year of PS+. So $460 in total. But I've also been putting games on hold since after GoW to play on PS5 instead, I don't want to play inferior versions on my old PS4. And offline only for most of the gen. So, it's been cheap but not without downsides.

Personally I think the free Stadia Base is the most interesting, you don't need to pay for online gaming and the hardware will be evolving over time, you just buy the games and play. Sure you only get 1080p 60fps but that's still an upgrade in many cases. And honestly, 4K is the most overhyped and resources wasting tech ever, especially for gaming in a living room setup where the longer distance to the TV will decrease the meaning of the higher resolution.
If I can get 60fps on my internet then I wouldn't mind that at all. I'm running about 30Mbps (rural college town) and I'm afraid that I'd be dipping below 30fps. I'm not against this future, but I just don't think it's ready yet, and I for sure don't want to be one of the first that deals with the headaches of not being able to play the $60 game I just bought. Big reason I don't play on PC that much.

The industry is also not ready for an all-streaming future. If they were, then Nintendo, Nextbox and PS5 would be doing it more than they are. Sony is dabbling with PS Now, Xbox is looking into cloud gaming, and Nintendo does some weird magic with the NES and SNES catalog, but those games are pretty small to stream (if that's how they do it at all). Might just check for the licence or whatever. But yeah, I won't say never, just not yet.
 

Deleted member 20284

User requested account closure
Banned
Oct 28, 2017
2,889
Interpolation and extrapolation are not the same thing as this at all. Why would you even equate the two?

Why would you even limit to just those two?

There is far more prediction at play to get negative latency or the same latency as local hardware e.g. user inputs, graphics prerendering, audio cues, movement prediction, gunplay, enemies movement, action/reaction, pathing, physics, animation. You go ahead and enjoy a game where AI predicts what you're going to do before you do it just to give you the appearance of a real time streaming game. I'll stick with the local hardware for as long and as much as possible where I can, given current network/cloud usage as it's already a blurred line.

Don't think you even read the article mate -

This can include increasing fps rapidly to reduce latency between player input and display, or even predicting user inputs.

Yes, you heard that correctly. Stadia might start predicting what action, button, or movement you're likely to do next and render it ready for you – which sounds rather frightening.

But by all means keep shouting misinformation from your soapbox.
 

Baked Pigeon

Banned
Oct 27, 2017
7,087
Phoenix
If Google can develop "negative latency" then why the heck don't we have these geniuses trying to solve real problems in the world? Seems like a waste of intelligence.
 

Alucardx23

Member
Nov 8, 2017
4,716
Why would you even limit to just those two?

There is far more prediction at play to get negative latency or the same latency as local hardware e.g. user inputs, graphics prerendering, audio cues, movement prediction, gunplay, enemies movement, action/reaction, pathing, physics, animation. You go ahead and enjoy a game where AI predicts what you're going to do before you do it just to give you the appearance of a real time streaming game. I'll stick with the local hardware for as long and as much as possible where I can, given current network/cloud usage as it's already a blurred line.

Don't think you even read the article mate -



But by all means keep shouting misinformation from your soapbox.

Where is the misinformation? Can you please explain?
 

Hailinel

Shamed a mod for a tag
Member
Oct 27, 2017
35,527
Just...lol.

Negative latency can't exist. You can't have less than zero latency, and even absolutely zero latency over a network is impossible.
 

Musubi

Unshakable Resolve - Prophet of Truth
Member
Oct 25, 2017
23,748
Just...lol.

Negative latency can't exist. You can't have less than zero latency, and even absolutely zero latency over a network is impossible.

It's a dumb marketing buzzword. I'd imagine what actually happens with this is they feed all the input data from users playing games into a learning A.I. that can recognize and identify how people play each game, thus allowing it to pre-fetch commands.
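Something like this, very roughly (a made-up Python sketch of the idea, not anything Google has confirmed): a model learns which input tends to follow which, and the server pre-fetches/pre-renders for the most likely one.

from collections import Counter, defaultdict

class InputPredictor:
    """Toy first-order model: which button tends to follow which."""
    def __init__(self):
        self.transitions = defaultdict(Counter)   # previous input -> counts of next input

    def observe(self, prev_input, next_input):
        self.transitions[prev_input][next_input] += 1

    def predict(self, prev_input):
        counts = self.transitions[prev_input]
        return counts.most_common(1)[0][0] if counts else None

predictor = InputPredictor()
for prev, nxt in [("sprint", "jump"), ("sprint", "jump"), ("sprint", "shoot")]:
    predictor.observe(prev, nxt)

print(predictor.predict("sprint"))   # "jump" -> the server could pre-render the jump frame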
 

Hailinel

Shamed a mod for a tag
Member
Oct 27, 2017
35,527
Its a dumb marketing buzzword. I'd imagine what actually happens with this is they feed all the input data from users playing games into a learning A.I. that can recognize and identify how people play each game thus allowing it to pre-fetch commands.
Yeah. Even so, that'll make fighting games hilarious to play.
 

Jedi2016

Member
Oct 27, 2017
15,903
They'll have it in a year or two, they say... so what happens when the product is killed off in nine months?
 

Alucardx23

Member
Nov 8, 2017
4,716
Yeah. Even so, that'll make fighting games hilarious to play.

Correct me if I'm wrong, but somehow I'm getting the feeling that you're understanding this as the game making a wrong prediction and still sending the wrong frame afterwards. This is not like you pressed jump, the game predicted walk, and so you walk instead of jump.
 

blitzblake

Banned
Jan 4, 2018
3,171
Is this guy saying that with machine learning, the cloud will be able to predict what you're going to do and have it queued up ready to process before you've even pressed the button?
 

brainchild

Independent Developer
Verified
Nov 25, 2017
9,484
Just...lol.

Negative latency can't exist. You can't have less than zero latency, and even absolutely zero latency over a network is impossible.

That's like saying that pre-rendering content is impossible.

The algorithms they use are predictive and process data for inputs that haven't actually occurred on the client side yet, so in that sense, yes, negative latency is possible because the algorithms determine the outcome before it even happens.

If 0 latency is an input being processed on the server side at the same exact time said input was sent out from the client side, then logically, the server processing data for the input BEFORE it was sent from the client side would be negative latency.

Is this guy saying that with machine learning, the cloud will be able to predict what you're going to do and have it queued up ready to process before you've even pressed the button?

Essentially, yes.
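To put some toy numbers on the idea (all figures invented, just to show where the round trip could hide):

# Toy numbers only, not measurements: compare perceived input-to-display time
# when the server reacts to an input vs. when it guessed correctly and had the
# frame rendered before the input arrived.
rtt_ms = 30             # client <-> server round trip
render_ms = 16          # server-side render time for one frame
codec_ms = 8            # video encode on the server + decode on the client

reactive = rtt_ms / 2 + render_ms + codec_ms + rtt_ms / 2      # wait for input, then render
speculative = rtt_ms / 2 + codec_ms + rtt_ms / 2               # frame was already rendered

print(reactive, speculative)   # 54.0 vs 38.0 -> the render step is hidden by the prediction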
 

Deleted member 20284

User requested account closure
Banned
Oct 28, 2017
2,889
Where is the misinformation? Can you please explain?

Snarky reply - I guess you need to extrapolate it from my previous reply.

Honest reply - Here's the rub - It's not just interpolation of graphics or extrapolation of a player simply moving forward. Stadia devs are saying they are going to predict the likely outcomes of what a player may do before they do it then render/process/calculate a range of preemptive variations but only deliver the one needed, ready to go off the shelf, once the gamer decides the input, perhaps even prior to the input being made.

It's bullshit, taking even just digital inputs and not analog sensitivity that a player on any given input can make dozens of choices for. So Stadia devs further claim the highest level of processing with Stadia over say Xbox or PS5 next gen because of the cloud? Bullshit again. They are not going to render dozens of variations every single frame, they are not going to deliver that on every frame render and then extrapolation and interpolation techniques are the ending results smoothing over a myriad of developer/processing choices being made via prediction of the player based on AI or machine learning.

It's a great system on paper but there is no fucking way they're rendering enough variations per frame in, say, FPS games like Halo or Apex and having "negative latency" in direct comparison to local hardware on a real-world Internet connection from, say, Australia, where Stadia isn't even launching. It's a fucking pipe dream and this dev is deluded or a snake oil salesman.

I love the tech concept and yes it's worthy of pushing further but don't sell it like this. It's not comparable. Don't honestly try and tell me Stadia is going to provide 10 times the processing power per gamer over a next gen console at home to deliver on negative latency and get it anywhere near 75% correct on every frame or preemptive interval they decide.
 

low-G

Member
Oct 25, 2017
8,144
That's like saying that pre-rendering content is impossible.

The algorithms they use are predictive and process data for inputs that haven't actually occurred on the client side yet, so in that sense, yes, negative latency is possible because the algorithms determine the outcome before it even happens.

If 0 latency is an input being processed on the server side at the same exact time said input was sent out from the client side, then logically, the server processing data for the input BEFORE it was sent from the client side would be negative latency.



Essentially, yes.

So how are they dealing with the overhead to revert states when they get prediction wrong 99% of the time?

Even games with rewind functionality built from the ground up don't get it 100% right, and they don't have the capacity to just dump in 8GB of RAM in a microsecond.
 

GameAddict411

Member
Oct 26, 2017
8,577
As an electrical engineering student, I can say they are full of shit. There is something called propagation delay. It's limited by many factors, like the medium, but the greatest limitation is the speed of light. So no matter what they do, this will always be a limiting factor on responsiveness.
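For a rough sense of that floor (back-of-envelope numbers, assuming light in fibre travels at about two thirds of c):

C_VACUUM_KM_PER_S = 300_000
FIBRE_FACTOR = 0.67                      # rough slowdown from the fibre's refractive index

def min_round_trip_ms(distance_km):
    one_way_s = distance_km / (C_VACUUM_KM_PER_S * FIBRE_FACTOR)
    return 2 * one_way_s * 1000

print(round(min_round_trip_ms(100), 1))    # ~1.0 ms  - data centre in your city
print(round(min_round_trip_ms(4000), 1))   # ~39.8 ms - cross-continent, before any other overhead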
 

Alucardx23

Member
Nov 8, 2017
4,716
Snarky reply - I guess you need to extrapolate it from my previous reply.

I just asked where the misinformation was in the post you replied to.

Honest reply - Here's the rub - It's not just interpolation of graphics or extrapolation of a player simply moving forward. Stadia devs are saying they are going to predict the likely outcomes of what a player may do before they do it then render/process/calculate a range of preemptive variations but only deliver the one needed, ready to go off the shelf, once the gamer decides the input, perhaps even prior to the input being made.

Yes, something like that is demonstrated on the video below.




On this Digital Foundry video they also talk about what they might be doing.




It's bullshit, taking even just digital inputs and not analog sensitivity that a player on any given input can make dozens of choices for. So Stadia devs further claim the highest level of processing with Stadia over say Xbox or PS5 next gen because of the cloud? Bullshit again. They are not going to render dozens of variations every single frame, they are not going to deliver that on every frame render and then extrapolation and interpolation techniques are the ending results smoothing over a myriad of developer/processing choices being made via prediction of the player based on AI or machine learning.

We don't know exactly how the prediction will work, so I would calm down, first investigate what might be happening as shown in the videos I shared, and then comment on that before jumping to conclusions. You are declaring that dozens of variations will be simulated on each frame, but I would like to know where you are getting this information from, as that claim is not made in the interview.

It's a great system on paper but there is no fucking way they're rendering enough variations per frame in say FPS games like Halo or Apex and having "negative latency" in direct comparison to local hardware on a real world Internet connection from say Australia, where Stadia isn't even launching. It's a fucking pipe dream and this dev is deluded or a snake oil salesmen.

Let's say that they just simulate you pressing the shooting button, as seen in the Outatime video I shared. Only that and nothing else. Right there it would mean that the Stadia server would have a frame in the queue, ready to be delivered the second the input from the client's side arrives. That right there would be an example of negative latency as defined by Google. Right now you are simply jumping to conclusions and refuting several things that are not even part of the interview with the Stadia engineer. I don't know why you'd use a non-supported location of the world as an example.

I love the tech concept and yes it's worthy of pushing further but don't sell it like this. It's not comparable. Don't honestly try and tell me Stadia is going to provide 10 times the processing power per gamer over a next gen console at home to deliver on negative latency and get it anywhere near 75% correct on every frame or preemptive interval they decide.

Where does this 10 times the processing power comment come from? And 10 times what, exactly? An Xbox One, a PS4, a 10K gaming PC? Where are you getting this information?
 
Last edited:

low-G

Member
Oct 25, 2017
8,144
I just asked where was the miss information on the post you replied to.



Yes, something like that is demonstrated on the video below.



On this Digital Foundry video they also talk about what they might be doing.






We don't know exactly how the prediction will work, so I would calm down, first investigate on what might be happening as shown on the videos I shared and then comment on that before jumping to conclusions. You are declaring that dozens of variations will be simulated on each frame, but I would like to know from where are you getting this information, as this comment is not made on the interview?



Let's say that they just simulate you pressing the shooting button as seen on the Outatime video I shared. Only that and nothing else. Right there it would mean that the Stadia server would have a frame on queue ready to be delivered the second the input from the client's side arrives. That right there would be an example of negative latency as defined by Google. Right now you are simply jumping to conclusions and are refuting several things that are not even part of the Interview made to the Stadia engineer. Don't know why put a non supported location of the world as an example.



Where is this 10 times the processing power comment comes from? And 10 times what thing? An Xbox One, PS4, 10K Gaming PC? Where are you getting this information?


Where are you getting your misinformation from? Tell me again how you're rendering something like Control with RTX effects, and you're rendering at >60fps, and you have multiple instances, and when you predict wrong you have a $40 million, 16GB L1 cache per user to simply restore the state so you can continue predicting future frames.

It's very viable tech for games that can run on a Raspberry Pi 1. But nobody has answers to the above questions of mine because it's literally impossible to resolve.
 

Alucardx23

Member
Nov 8, 2017
4,716
Where are you getting your misinformation from? Tell me again how you're rendering something like Control with RTX effects, and you're rendering at >60fps, and you have multiple instances, and when you predict wrong you have a $40 million 16GB of L1 cache per user to simply restore the state so you can continue predicting future frames.

I have posted where I get my information. Where are you getting yours? Have you taken a couple of minutes of your precious time to see it? 40 million 16GB L1 cache per user? WTF are you talking about?

 

low-G

Member
Oct 25, 2017
8,144
I have posted where I get my information. Where are you getting yours? Have you taken a couple of minutes of your precious time to see it? 40 million 16GB L1 cache per user? WTF are you talking about?

I'm talking about how games actually work. Not a bunch of speculation, marketing, and tech demos running 15 year old games.

Explain to me how you're reloading the state in Control. Explain to me how you're taking known input and rendering out 5 more frames in 16ms, and then reloading the state to -5 frames and rendering out a correct 10 frames in 16ms again when you get a different input from the user.

Tell me how they could ever conceivably do any of what I just said is required to fulfill the tech you describe.
 

Alucardx23

Member
Nov 8, 2017
4,716
I'm talking about how games actually work. Not a bunch of speculation, marketing, and tech demos running 15 year old games.

Explain to me how you're reloading the state in Control. Explain to me how you're taking known input and rendering out 5 more frames in 16ms, and then reloading the state to -5 frames and rendering out a correct 10 frames in 16ms again when you get a different input from the user.

Tell me how they could ever conceivably do any of what I just said is required to fulfill the tech you describe.

Ooooh, now I remember you. You were talking about how it would be "extremely unlikely" that the technique below would be used on a streaming service, and here we are just a few months later. I wonder where we will be a couple of months from now as the information continues to come out.




This is what I was saying at the time referring to the Outatime video.

"Just concentrate on the fact that there is a solution already tested that shows that it is posible to use frame Speculation. I didn't say Google was using this technique exactly, but they might be using something similar, if not more kudos to them since it seems they are handling latency very good. I can bet you my life that as streaming services become more popular you will see innovative solutions like Outatime Speculation. You can also check the link below for another example of what might be used to reduce latency."

I'm sorry bro, but you are a big waste of time. It is extremely obvious that someone like you won't dedicate a single second of your time to actually learn and analyze the information that has been shared here. Take the time to reflect and actually view the content that I shared. That is a good place to start.
 
Last edited:

brainchild

Independent Developer
Verified
Nov 25, 2017
9,484
I'm talking about how games actually work. Not a bunch of speculation, marketing, and tech demos running 15 year old games.

Explain to me how you're reloading the state in Control. Explain to me how you're taking known input and rendering out 5 more frames in 16ms, and then reloading the state to -5 frames and rendering out a correct 10 frames in 16ms again when you get a different input from the user.

Tell me how they could ever conceivably do any of what I just said is required to fulfill the tech you describe.

After seeing you just summarily dismiss Alucardx23's post as "speculation" (for the record 'speculative execution' actually exists, it's not just a theoretical concept) I'm not even going to waste my time with you.

It's obvious that you will simply ignore the evidence even if it slaps you in the face because you're convinced that you couldn't possibly be wrong about your assumptions.

I'm sorry bro, but you are a big waste of time. It is extremely obvious that someone like you wont dedicate a single second of their time to actually learn and analyze the information that has been shared here. Take the time to reflect and actually view the content that I shared. That is a good place to start.

Completely agree.
 

jotun?

Member
Oct 28, 2017
4,520
Just...lol.

Negative latency can't exist. You can't have less than zero latency, and even absolutely zero latency over a network is impossible.
Like a lot of the OP article, it's kind of a bad choice of words. I think the idea is that it's "negative" because before you even press a button, your client has the corresponding result frame (among other possibilities) available.

You could also say that the "negative" claim is relative to using traditional local hardware. Normally when you press a button, the game has to take various steps to process it and then render the result. In some games there's a significant processing delay between giving an input and seeing the output. But if this works as advertised, there's potentially a lot less the client has to do between a user input and displaying the result, since most of the processing was already done predictively by the server.
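Purely as a hypothetical illustration (not a described Stadia mechanism), the client could hold a few candidate frames keyed by predicted input and show the matching one the instant the button goes down:

# Hypothetical sketch: the server streams a few candidate frames for the inputs
# it considers likely; the client displays the match immediately and only then
# reports the real input upstream for reconciliation.
candidate_frames = {
    "shoot": "<frame: muzzle flash>",
    "jump":  "<frame: jump start>",
    None:    "<frame: nothing pressed>",
}

def display(frame):
    print("showing", frame)

def send_to_server(button):
    print("uplink", button)

def on_input(pressed):
    frame = candidate_frames.get(pressed, candidate_frames[None])  # fall back on a miss
    display(frame)                 # no network wait when the guess was right
    send_to_server(pressed)        # the server still gets the authoritative input

on_input("shoot")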
 

low-G

Member
Oct 25, 2017
8,144
After seeing you just summarily dismiss Alucardx23's post as "speculation" (for the record 'speculative execution' actually exists, it's not just a theoretical concept) I'm not even going to waste my time with you.

It's obvious that you will simply ignore the evidence even if it slaps you in the face because you're convinced that you couldn't possibly be wrong about your assumptions.



Completely agree.

Astoundingly ignorant and dismissive comment from you 'brainchild'. You're a fool and you don't even remotely understand this tech. You can't even give a single counterpoint to a single one of the absolute showstoppers I have raised.

This. Tech. Doesn't. Work. And. You. Are. A. Sucker.
 

Alucardx23

Member
Nov 8, 2017
4,716
Like a lot of the OP article, it's kind of a bad choice of words. I think the idea is that it's "negative" because before you even press a button, your client has the corresponding result frame (among other possibilities) available.

You could also say that the "negative" claim is relative to using traditional local hardware. Normally when you press a button, the game has to take various steps to process it and then render the result. In some games there's a significant processing delay between giving an input and seeing the output. But if this works as advertised, there's potentially a lot less the client has to do between a user input and displaying the result, since most of the processing was already done predictively by the server.

You see Dunlop, there is hope in the world. Someone that actually took the time to read and analyze what might be happening here. :D
 

brainchild

Independent Developer
Verified
Nov 25, 2017
9,484
Astoundingly ignorant and dismissive comment from you 'brainchild'. You're a fool and you don't even remotely understand this tech. You can't even give a single counterpoint to a single one of the absolutely showstoppers I have raised.

This. Tech. Doesn't. Work. And. You. Are. A. Sucker.

This Trumpian level take certainly gave me a chuckle. You have a wonderful day/night.
 

Alucardx23

Member
Nov 8, 2017
4,716
Astoundingly ignorant and dismissive comment from you 'brainchild'. You're a fool and you don't even remotely understand this tech. You can't even give a single counterpoint to a single one of the absolutely showstoppers I have raised.

This. Tech. Doesn't. Work. And. You. Are. A. Sucker.

[Reaction GIF: 9Ht.gif]
 

Spinluck

▲ Legend ▲
Avenger
Oct 26, 2017
28,611
Chicago
Love when ERA does a YouTube comments impression and ignores interesting tech explanations to get a quick fuck you in.

Oh wait a minute, no I don't. Be better than this.
 

Deleted member 20284

User requested account closure
Banned
Oct 28, 2017
2,889
I just asked where was the miss information on the post you replied to.

Done with this one, it ain't just interpolation and extrapolation. You said that, it isn't correct, that's the misinformation.

We don't know exactly how the prediction will work, so I would calm down, first investigate on what might be happening as shown on the videos I shared and then comment on that before jumping to conclusions. You are declaring that dozens of variations will be simulated on each frame, but I would like to know from where are you getting this information, as this comment is not made on the interview?

Of course that is how the prediction works; it can't just take one guess and render one frame back like a local console does. The prediction tech will whittle down a list of possible machine-learnt outcomes; once the player input(s) are parsed, the ready-to-go latency-saving frame is sent. This is of course the methodology of how the tech works. We're not privy to the technical specifics as yet, but the method of how this works is around in other, non-gaming tech.

Of course there are dozens of variations: take a simple list of possible inputs or combinations of inputs on a given frame or prediction interval (as I don't believe they will do this for every single frame) and you'll see it is variations and a small pooled list of which ones to prerender. Take a moment in Halo for example. The player decides to move left, shoot and jump all at the same frame/interval timing. Now the variation for prediction is that the player could have stayed where they were, could have moved right, could have crouched, could have swapped weapons, could have moved forward or backward, etc.

The Stadia prediction will rule out unlikely ones and not prerender those. It will prerender a "short list" of variations; once the input is received, or the AI decides to send the stream frame down, it cycles again and again in a continuous dance of what variations to render and which one to send, rinse and repeat.

So it's rather moot what you say; this is the basis of the AI/machine learning/predictions.
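In rough Python, that loop might look something like this (my own sketch; the actual cadence, shortlist size and costs are unknown):

def prediction_cycle(rank_inputs, prerender, render_now, wait_for_input, top_k=3):
    # Rank likely inputs, pre-render only a short list, then serve whichever
    # one the real input matches - or fall back to a normal render on a miss.
    shortlist = rank_inputs()[:top_k]
    prerendered = {inp: prerender(inp) for inp in shortlist}
    actual = wait_for_input()
    if actual in prerendered:
        return prerendered[actual]          # latency saved: the frame was ready
    return render_now(actual)               # misprediction: full render as usual

frame = prediction_cycle(
    rank_inputs=lambda: ["shoot", "jump", "strafe_left", "reload"],
    prerender=lambda inp: f"frame({inp})",
    render_now=lambda inp: f"late frame({inp})",
    wait_for_input=lambda: "jump",
)
print(frame)   # frame(jump) -> a hit this interval; next interval the dance repeats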

Let's say that they just simulate you pressing the shooting button as seen on the Outatime video I shared. Only that and nothing else. Right there it would mean that the Stadia server would have a frame on queue ready to be delivered the second the input from the client's side arrives. That right there would be an example of negative latency as defined by Google. Right now you are simply jumping to conclusions and are refuting several things that are not even part of the Interview made to the Stadia engineer. Don't know why put a non supported location of the world as an example.

See above. They can't just render one prediction over one simple input. They have to decide which variations to render, then decide which one to actually send. This isn't going to be a 100% solution. Think of it like dynamic resolution solutions: predictions are sometimes going to be correct and save latency round trips/time, and sometimes they're going to be incorrect and have to render as normal, extending that latency round trip/time.

There is room for developer decisions here, e.g. they could simply overrule player inputs to appear to have a smooth/real-time game, they could choose to have latency/lag as they correct prediction mistakes, or they may have a sync system in place to keep things ticking along.

Here's a dumbed-down version for ya: you watch a YouTube video, and their server and your local computer cache/download what you are watching ahead of time so it's smooth and high quality. Bit rate and resolution also jump up and down as things slow down or speed up. Now introduce variations, in that you can change the video you are watching on any given frame, and YT has to predict which one you will be inputting and watching on any given frame or interval. Gaming introduces that level of variation in inputs and context while you play games such as fighting or FPS games.


Where is this 10 times the processing power comment comes from? And 10 times what thing? An Xbox One, PS4, 10K Gaming PC? Where are you getting this information?

From the quote by the Google dev themselves -

Google Stadia will be faster and more responsive than local gaming systems in "a year or two," according to VP of engineering Majd Bakar. Thanks to some precog trickery, Google believes its streaming system will be faster than the gaming systems of the near-future, no matter how powerful they may become. But if the system is playing itself, does that really count?

Speaking with Alex Wiltshire in Edge magazine #338, Google's top streaming engineer claims the company is verging on gaming superiority with its cloud streaming service, Stadia, thanks to the advancements it's making in modelling and machine learning. It's even eyeing up the gaming performance crown in just a couple of years.

So they have to render a shortlist of variations in-game; this means they have to render multiple variations of the game on the fly and repeat that continuously. This requires more processing power than local hardware rendering just one game world per frame.
 

EVIL

Senior Concept Artist
Verified
Oct 27, 2017
2,790
Snarky reply - I guess you need to extrapolate it from my previous reply.

Honest reply - Here's the rub - It's not just interpolation of graphics or extrapolation of a player simply moving forward. Stadia devs are saying they are going to predict the likely outcomes of what a player may do before they do it then render/process/calculate a range of preemptive variations but only deliver the one needed, ready to go off the shelf, once the gamer decides the input, perhaps even prior to the input being made.

It's bullshit, taking even just digital inputs and not analog sensitivity that a player on any given input can make dozens of choices for. So Stadia devs further claim the highest level of processing with Stadia over say Xbox or PS5 next gen because of the cloud? Bullshit again. They are not going to render dozens of variations every single frame, they are not going to deliver that on every frame render and then extrapolation and interpolation techniques are the ending results smoothing over a myriad of developer/processing choices being made via prediction of the player based on AI or machine learning.

It's a great system on paper but there is no fucking way they're rendering enough variations per frame in say FPS games like Halo or Apex and having "negative latency" in direct comparison to local hardware on a real world Internet connection from say Australia, where Stadia isn't even launching. It's a fucking pipe dream and this dev is deluded or a snake oil salesmen.

I love the tech concept and yes it's worthy of pushing further but don't sell it like this. It's not comparable. Don't honestly try and tell me Stadia is going to provide 10 times the processing power per gamer over a next gen console at home to deliver on negative latency and get it anywhere near 75% correct on every frame or preemptive interval they decide.
MP games make "predictions" all the time for well over a decade now about what a player might do. these are micro predictions in the milliseconds and generally are very successful but like in MP games you might get some mismatches. normally these aren't noticeable but the higher the latency the more you get these artifacts (rubberbanding) where the prediction has to correct every time it gets updated. the longer time between updates the more dramatic the effect. I can imagine Stadia using a similar approach and its not some nonsense. What they are talking about are in the range of frames, prediction milliseconds ahead to bridge the noticeable latency gap

simple but technical demonstration of this stuff is explained here

edit: I am not saying it's exactly this, but you can do these predictions locally to further reduce latency, and nobody knows what machine learning can bring to the table. All this possible future real-time game data from thousands of players being studied by AI, building some prediction algorithm.
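The client-side flavour of this is plain dead reckoning (generic netcode, nothing Stadia-specific): extrapolate from the last known state, then correct when the authoritative update lands.

def extrapolate(last_pos, last_vel, dt):
    # Predict where a remote player is between server updates.
    return tuple(p + v * dt for p, v in zip(last_pos, last_vel))

last_update = {"pos": (10.0, 0.0), "vel": (3.0, 0.0)}    # from the last server tick
predicted = extrapolate(last_update["pos"], last_update["vel"], dt=0.05)

authoritative = (10.2, 0.1)                               # next server tick arrives
error = tuple(a - p for a, p in zip(authoritative, predicted))
print(predicted, error)   # the bigger this error, the more visible the correction (rubberbanding)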
 
Last edited:

Ox Code

Member
Jul 21, 2018
376
That Outatime video (which will be posted another 18,003 times and still be ignored) is a good reference point to remember that there are so many areas that have yet to be explored with a cloud-only gaming infrastructure.

And also that you probably shouldn't be immediately throwing shade on Google engineers when they talk about cloud and networking. You'd be better off trying to convince Steph Curry that he's bad at three pointers.
 

Deleted member 4372

User requested account closure
Banned
Oct 25, 2017
5,228
I've been saying this for a long time. People don't understand that you can do some pretty insane shit these days to eliminate input latency. AI prediction, asynchronous timewarp. I believe them.

People here seem to think it's input -> render -> display.

Asynchronous timewarp changes that to input -> render -> input -> adjust render -> display. You can virtually eliminate most noticeable input latency with that alone.

Considering that render latency tends to be 60-100ms+ in most games, a 20-40ms round trip latency to the server is not a dealbreaker here. Cloud infrastructure offers a lot more possibilities to reduce input and render latency than local hardware can.
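A crude sketch of that pipeline (borrowing the late-reprojection idea from VR timewarp; whether any streaming service does exactly this is my speculation):

def render_scene(camera_yaw):
    # Expensive render using the input sampled at the start of the frame.
    return {"image": "rendered frame", "rendered_yaw": camera_yaw}

def reproject(frame, latest_yaw):
    # Cheap adjustment using the freshest input, just before display.
    delta = latest_yaw - frame["rendered_yaw"]
    return {"image": frame["image"], "shift_applied_degrees": delta}

early_yaw = 90.0                 # input -> render
frame = render_scene(early_yaw)
latest_yaw = 91.5                # input (again) -> adjust render -> display
print(reproject(frame, latest_yaw))   # {'image': 'rendered frame', 'shift_applied_degrees': 1.5}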

Go away with your unfounded scientific research and lack of hyperbole.
 

Csr

Member
Nov 6, 2017
2,039
I could be way off, but it sounds a lot like rollback netcode in fighting games, which, while it is pretty good if implemented correctly, is obviously not better than local gaming.
I imagine there will be warping/teleporting if the code predicts wrong and if ping is beyond a certain point, but I could see how for some folks it could be indistinguishable from offline.
I also assume it requires the game to be developed with this tech in mind, so it will probably be implemented in games exclusive to streaming platforms; if these services really take off, maybe there will be more widespread adoption, but I don't think this will happen any time soon, and as with rollback netcode, not all implementations will be good.
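For the unfamiliar, rollback works roughly like this (toy sketch, not any particular game's implementation): keep per-frame snapshots, simulate ahead with a guessed remote input, and when the real input arrives different from the guess, rewind and resimulate.

import copy

def step(state, local_inp, remote_inp):
    # Stand-in for one frame of game simulation.
    state["p1"] += local_inp
    state["p2"] += remote_inp
    return state

state = {"p1": 0, "p2": 0}
history = []                                  # (frame, snapshot, local, predicted_remote)
for frame in range(3):
    history.append((frame, copy.deepcopy(state), 1, 0))   # guess: remote does nothing
    state = step(state, 1, 0)

# The real remote input for frame 1 turns out to be 1, not 0: rewind and replay.
_, snapshot, local_inp, _ = history[1]
state = copy.deepcopy(snapshot)
state = step(state, local_inp, 1)             # frame 1 with the confirmed input
state = step(state, 1, 0)                     # frame 2 replayed with its prediction
print(state)                                  # {'p1': 3, 'p2': 1} - corrected timeline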

This sounds promising, but imo they should choose their words better and not make claims like "faster and more responsive than local gaming" without having something to show first, because they sound like they are trying to sell me snake oil.
 

Alucardx23

Member
Nov 8, 2017
4,716
Done with this one, it ain't just interpolation and extrapolation. You said that, it isn't correct, that's the misinformation.

Where was it said that this was the only thing that could be happening? Have you actually taken the time to read the other post from exodus? Just like me, he has given several examples of what might be happening here, based on the information we have so far.

Of course that is how the prediction works, it can't just take one guess and render one frame back like a local console does. The prediction tech will whittle down a list of possible machine learnt outcomes, once the player input(s) are parsed the ready to go latency saving frame is sent. This is of course the methodology of how the tech works. The technical specifics we're not privileged to as yet but the method of how this works is around in other non gaming tech etc.

Of course there are dozens of variations, take a simple list of possible inputs or combination of inputs on a given frame or prediction interval (as I don't believe the will do this for every single frame) and you'll see it is variations and a small pooled list of which ones to prerender. Take a moment in Halo for example. Player decides to move left, shoot and jump all at the same frame/interval timing. Now the variation for prediction is that player could have stayed where they were, could have moved right, could have crouched, could have swapped weapons, could have moved forward or backward etc.

The Stadia prediction will rule out unlikely ones and not prerender those. It will prerender a "short list" of variations, once the input is received or the AI decides to send the stream frame down then it cycles again and again in a continuous dance of what variations to render and which one to send, rinse repeat.

So it's rather moot what you say, this is the basis of the AI/machine learning/predictions.

Again, where are you getting the dozen variations comment? You are talking as if you got a direct quote from the Google engineer explaining to you exactly how this will work. We don't know, for example, if the predicted frame is sent right away (like in Outatime) or if they wait until they receive the input from the client and then send the frame. Those right there are two very different scenarios that might be happening, and based on the information we have so far we cannot say. If you want to speculate, speculate all you want, but don't present your information as some quote or direct knowledge from Stadia. We do have the Outatime video as a likely reference for what might be happening, and we have the DF video stating how they might be using motion vectors to simulate what the next possible frame might look like. So go ahead: if you somehow think that the Outatime approach is impossible, or that it is not possible to use motion vectors to simulate where the screen will move, go ahead and make your argument.


See above, They can't just render one prediction over one simple input. They have to decide which variations to render then decide which one to actually send. This isn't going to be a 100% solution. Think of it as dynamic resolution solutions, predictions are going to sometimes be correct and save latency round trips/time and sometimes they're going to be incorrect and have to render as normal and extend that latency round trip/time.

There is room for developer decisions here e.g. they could simply overrule player inputs to appear to have a smooth/real time game, They could choose to have latency/lag as they correct predictions mistakes, they may have a sync system in place to keep things ticking along.

Here's a dumbed down version for ya - you watch a YouTube video and their server and your local computer cache/download what you are watching ahead of time so it's smooth and high quality. There is also bit rate and resolution jumps up and down as things slow down or speed up. Now introduce variations that you can change the video you are watching any given frame and YT is going to predict which you will be inputting and watching any given frame or interval. Gaming introduces that level of variations of inputs and context while you play games such as fighting or FPS games.

Above there is nothing that answers what I said. They could just have the gun animation ready to be sent at any given moment, once the server receives a trigger input. Something like that would definitely help. Even in local games, not all animations have the same input latency. For example, the jump button in Halo 3 takes more time to show the animation than the shooting animation does. So yes, the second we see the first game that has something like that, it would be an example of what Google is describing as negative latency.

From the quote by the Google dev themselves -

So they have to render a shortlist of variations in game, this means they have to render multiple variations of the game on the fly and repeat that ongoing. This requires more processing power than a local hardware version rendering just one game world at per frame.

So let me see if I understand: the quote "Google Stadia will be faster and more responsive than local gaming systems in a year or two" allows you to say that Google said they will provide 10 times the processing power? Is that what you are trying to say here?
 
Last edited:

Deleted member 20284

User requested account closure
Banned
Oct 28, 2017
2,889
MP games make "predictions" all the time for well over a decade now about what a player might do. these are micro predictions in the milliseconds and generally are very successful but like in MP games you might get some mismatches. normally these aren't noticeable but the higher the latency the more you get these artifacts (rubberbanding) where the prediction has to correct every time it gets updated. the longer time between updates the more dramatic the effect. I can imagine Stadia using a similar approach and its not some nonsense. What they are talking about are in the range of frames, prediction milliseconds ahead to bridge the noticeable latency gap

simple but technical demonstration of this stuff is explained here

edit: I am not saying its exactly this, but you can do these predictions locally to further reduce latency and nobody knows what machine learning can bring to the table. All this possible future real time game data from thousands of players being studied by AI, building some prediction algorithm

Yep, I've been getting killed by Halo network prediction for years. I'm aware of a few techniques in use for the last decade or two. I have serious doubts this technical solution is a win-win for gamers looking to replace local hardware, as I realise it's more a corporate play to Google's strengths and entry points into a market, e.g. they're going to make a killing with this low-investment tech for mobile/tablet players and grab a "lower cost end" segment of the console market at the same time.

It's not going to be comparable to a dedicated next gen console, it's not going to predict with 100% accuracy, and it's not going to out-compute next generation PCs or consoles due to the overhead in the prediction tech. The point of this thread isn't whether the tech is good or not, it's about the claims of the VP of engineering. I stand by my comment: he's deluded or snake-oil selling the shit out of this. Especially for someone like me from Australia. I really don't want to play a game where the cloud is deciding what I'm going to do before I do it, overruling that shit due to latency thresholds, all while delivering a fluctuating bit rate stream and eating up bandwidth in a bandwidth-restrictive country like Australia.

It ain't "faster and more responsive than local gaming systems in a year or two", to use the exact words of VP of engineering Majd Bakar.
 
Last edited:

EVIL

Senior Concept Artist
Verified
Oct 27, 2017
2,790
Yep, I've been getting killed by Halo network prediction for years. I'm aware of a few techniques in use for the last decade or two. I have serious doubts this technical solution isn't a solution for gamers to replace local hardware with a win win as I realise it's more a corporate play to Google's strengths and entry points to a market e.g. they're going to make a killing with this low investment tech for mobile/tablet players and grab a "lower cost end" segment of the console market at the same time.

It's not going to be comparable to a dedicated next gen console, it's not going to predict with 100% accuracy and it's not going to outcompute next generations PCs or consoles due to the overhead in the prediction tech. The point of this thread isn't whether the tech is good or not, it's about the claims of the VP of engineering. I stand by my comment he's deluded or snake oil selling the shit out of this. Especially for someone like me from Australia. I'm really not wanting to play a game where the cloud is deciding what I'm going to do before I do, overrule that shit due to latency thresholds all while delivering a fluctuating bit rate stream and eating up bandwidth for a bandwidth restrictive country like Australia.

It's ain't "faster and more responsive than local gaming systems in "a year or two,"" to use the exact words of VP of engineering Madj Bakar.
It's a big shoe to fill, but certainly not an impossible one. Of course you will always have edge cases, but saying that because you live in Australia and suffer from issues, because you are in effect an edge case, does not make the tech less possible. For most people, network prediction works 99.9% of the time; does this mean the tech doesn't work? No. Just because you live somewhere it's most likely to fail does not make the technology less possible.
You say you live in Australia with tons of shit internet and high latency, so of course you are going to bash a pure online service like Stadia, which most likely won't perform as it should in your neck of the woods.

Why even comment on this stuff at all if you are not even going to use it? Are you afraid your current gen or next gen console will suddenly disappear? Poof? Don't worry. Standard consoles will still be around for decades, long enough for your end of the world to upgrade its internet infrastructure. You talk as if progress never happens and technology doesn't advance. In 10 years the gaming landscape will be different and streaming will be an integral part of it.
 

Alucardx23

Member
Nov 8, 2017
4,716
Found some more examples of how input latency varies per game.

"But in the meantime, while overall "pings" between console and gamer remain rather high, the bottom line seems to be that players are now used to it, to the point where developers - like Infinity Ward - centred on getting the very lowest possible latencies are using that to give their games an edge over the competition. Call of Duty's ultra-crisp response is one of the key reasons why it's a cut above its rivals, and it's a core part of a gameplay package that will once again top the charts this Christmas."

[Attached image: latency.png]