2 hours ago, GBN said:
It is an action game taking inspiration from Stick Fight and as of right now I have snapshot interpolation implemented (un-optimized). I am currently trying to learn and work through all of my problems as they come along. It seemed that the input buffer would be needed regardless, and probably even a snapshot buffer for the same reasoning?
I honestly can't remember right now exactly the reasons for using an input buffer. However, the overall concept always boils down to 'dampening': smoothing out 'bumpiness' or 'jerkiness' by introducing a little latency so that we neither starve for inputs nor receive them all at once.
This concept applies to rendering, input, networking, etc.
Think of trying to drink from a malfunctioning water tap: it sends huge bursts, then nothing, then huge bursts again. You can't drink from that!
So you attach a bottle to the tap, wait for it to fill a little, then poke a tiny hole in the bottle, through which you can drink a constant, steady stream.
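In code, that 'bottle' is just a buffer that plays inputs back a fixed number of ticks behind the newest arrival. Here's a minimal sketch (all names made up, assuming inputs are stamped with a tick number):

```cpp
// Minimal jitter-buffer sketch (hypothetical, not from any particular engine).
// Incoming inputs are stored by tick; playback runs a fixed number of ticks
// behind the newest arrival, trading a little latency for smoothness.
#include <algorithm>
#include <cstdint>
#include <iterator>
#include <map>
#include <optional>

struct Input { /* buttons, axes, ... */ };

class InputJitterBuffer {
public:
    explicit InputJitterBuffer(uint32_t delayTicks) : delay_(delayTicks) {}

    // Called whenever a packet arrives (possibly bursty or out of order).
    void Push(uint32_t tick, const Input& input) {
        buffer_[tick] = input;
        newest_ = std::max(newest_, tick);
    }

    // Called once per simulation tick. Returns nothing until the buffer has
    // 'filled a little' -- the bottle from the analogy above.
    std::optional<Input> Pop(uint32_t simTick) {
        if (simTick + delay_ > newest_) return std::nullopt; // still filling
        auto it = buffer_.find(simTick);
        if (it == buffer_.end()) return std::nullopt;        // input was lost
        Input in = it->second;
        buffer_.erase(buffer_.begin(), std::next(it));       // drop consumed/stale
        return in;
    }

private:
    uint32_t delay_;
    uint32_t newest_ = 0;
    std::map<uint32_t, Input> buffer_;
};
```

The `delayTicks` parameter is the knob: bigger means smoother playback but more latency, which is exactly the trade-off described above.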
2 hours ago, GBN said:
So the server would have to process twice as fast as the information it receives?
Only if you decide to solve that problem in the way I described (which only applies to deterministic simulations). If the server can't catch up, players may experience 'pauses' (jitter) or 'slowdowns' (like running a video at 0.75x speed).
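For the deterministic case, 'catching up' can be as simple as running extra simulation ticks in a single frame whenever the server has fallen behind, up to a cap. A rough sketch (the tick rate and cap here are assumptions, tune to taste):

```cpp
// Fixed-timestep catch-up loop (hypothetical sketch).
// If the server fell behind (e.g. while waiting on late inputs), it runs
// several ticks in one frame, up to a cap, to converge back to real time.
#include <chrono>

constexpr auto kTickDuration = std::chrono::milliseconds(16); // ~60 Hz
constexpr int kMaxTicksPerFrame = 4; // cap so a slow server degrades gracefully

void Simulate() { /* advance the deterministic simulation by one tick */ }

void ServerFrame(std::chrono::steady_clock::time_point& nextTick) {
    const auto now = std::chrono::steady_clock::now();
    int ticksRun = 0;
    while (now >= nextTick && ticksRun < kMaxTicksPerFrame) {
        Simulate();
        nextTick += kTickDuration;
        ++ticksRun;
    }
    // If we hit the cap frame after frame, the server genuinely can't keep
    // up, and players will see the 0.75x-style slowdown described above.
}
```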
Note that 25% packet loss doesn't necessarily mean a 0.75x playback rate, unless the server is super slow (i.e. it can barely hit its target rate) and you're actually sending your packets very far apart. If you send more packets, you compensate for the packet loss.
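To put rough numbers on that, assuming losses are independent (real losses are often bursty, so treat this as a best case): if each input rides in n consecutive packets, it only goes missing when all n are lost.

```cpp
// Back-of-envelope check: chance an input never arrives when it is
// carried redundantly in several consecutive packets.
#include <cmath>
#include <cstdio>

int main() {
    const double loss = 0.25; // assumed independent per-packet loss rate
    for (int copies = 1; copies <= 4; ++copies) {
        // The input is lost only if every packet carrying it is lost.
        std::printf("%d copies -> %.2f%% chance the input never arrives\n",
                    copies, std::pow(loss, copies) * 100.0);
    }
}
```

So at 25% loss, two copies cut the effective loss to about 6%, three copies to about 1.6%.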
This is the tricky part: TCP assumes that if packets are being dropped, it's because the receiver is overwhelmed and cannot process them, hence increasing the sending rate makes things worse (thus you should send less). But in reality packets can also get dropped because routes disappear, Wi-Fi signals are noisy, or cable modem/DSL lines have noise in them.
Also, TCP delivers everything in order: if a few packets are lost, TCP stops the whole stream and waits until it receives the missing packets. It's like saying "everyone silence! I want to hear it slowly, say it again." Glenn's method, by contrast, never stops because it includes redundancy: packet C includes the info from packets A and B, so if packet C arrives, everything can proceed. Eventually the client gets notified that A, B and C have been acknowledged, so it stops including them in packet E. You should only stop the whole thing if a lot of packets go unacknowledged, which means either the other end is dead or the connection is extremely poor.
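Here's a rough sketch of that redundancy scheme, in the spirit of Glenn's articles but not his actual code (all names hypothetical): the sender keeps every unacknowledged input and stuffs all of them into each outgoing packet, trimming the list as acks come back.

```cpp
// Redundant input sending sketch (hypothetical).
// Every outgoing packet carries all inputs the other end hasn't acknowledged
// yet, so any single delivered packet fills in earlier gaps.
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

struct Input { uint32_t tick; /* buttons, axes, ... */ };

class RedundantInputSender {
public:
    void QueueInput(const Input& in) { pending_.push_back(in); }

    // Build the payload for the next outgoing packet: it contains every
    // not-yet-acked input (so packet C carries A's and B's info too).
    std::vector<Input> BuildPacket() const {
        return {pending_.begin(), pending_.end()};
    }

    // The other end reports the newest tick it has received; everything up
    // to and including that tick can stop being resent.
    void OnAck(uint32_t ackedTick) {
        while (!pending_.empty() && pending_.front().tick <= ackedTick)
            pending_.pop_front();
    }

    // If this grows without bound, the other end is dead or the link is
    // hopeless, and the connection should be torn down.
    std::size_t Unacked() const { return pending_.size(); }

private:
    std::deque<Input> pending_;
};
```

Since inputs are tiny, packing a dozen redundant copies is still far cheaper than a single snapshot, which is why this works so well for input streams specifically.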