I am looking to gather opinions to check my assumptions, so I will try to keep my assertions to a minimum. Please challenge me.
We have a physics-based vehicle simulation running in an engine which does not support fixed-step updates (neither for the entire game thread nor for physics alone - nothing!). The approach I am most familiar with is the one demonstrated in the Overwatch netcode GDC talk (and referenced by Rocket League's): clients run ahead of the server, the server buffers inputs, the server tells clients to speed up or slow down based on RTT, and clients re-simulate from their local input buffer whenever their prediction diverges from the authoritative server state received N frames in the past.
I have been investigating client-side prediction, and it seems like every canonical example we hold up (Gabriel Gambetta's articles, the Overwatch talks, the Rocket League talk, etc.) relies on a fixed-step update. That way the client can predictively execute, say, 5 frames of W+D input, then 2 frames of W+A input, and so on, and the server can execute the same inputs on the same frames for the same durations, reducing divergence between the prediction and the authoritative state received back on the client after RTT + buffer time.
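To make sure I'm describing the fixed-step model correctly, here is a minimal sketch of the prediction/reconciliation loop I mean. All names (`simulate`, `PredictingClient`, `pending`) are my own illustration, and the "physics" is a trivial 1D integration standing in for a deterministic physics tick:

```python
DT = 1 / 60  # fixed step shared by client and server

def simulate(state: float, inp: float) -> float:
    # Stand-in for one deterministic physics tick (here: 1D velocity integration).
    return state + inp * DT

class PredictingClient:
    def __init__(self):
        self.state = 0.0
        self.tick = 0
        self.pending = {}  # tick -> input, kept until the server acks past it

    def local_tick(self, inp: float):
        # Predict immediately and remember the input for possible replay.
        self.pending[self.tick] = inp
        self.state = simulate(self.state, inp)
        self.tick += 1

    def on_server_state(self, server_tick: int, server_state: float):
        # Discard inputs the server has already consumed.
        self.pending = {t: i for t, i in self.pending.items() if t > server_tick}
        # Rewind to the authoritative state and re-simulate the remaining inputs.
        state = server_state
        for t in sorted(self.pending):
            state = simulate(state, self.pending[t])
        self.state = state
```

The whole scheme leans on tick numbers meaning the same wall-clock duration on both ends, which is exactly the assumption that breaks without a fixed step.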
Has anyone seen this done without fixed steps? What does it look like? I know Unreal doesn't use a fixed step and does have CSP, but that's for characters using a kinematic, hand-written movement system rather than a physics engine, which really isn't an option for us.
How do you reason about how long to apply inputs locally vs. on the server if the two sides don't agree on frame length? My guess is that you send a timestamp plus a duration for each input, and the server makes a best-effort attempt to apply your inputs for the specified duration (the difficulty of aligning this with server frame boundaries notwithstanding).
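Concretely, the server-side half of that guess might look something like the sketch below: a queue of (input, duration) segments, where each variable-length server frame consumes exactly its own `dt` worth of input, splitting a segment when it straddles a frame boundary. This is purely hypothetical (`InputQueue`, `push`, `consume` are names I made up for illustration), not something from any of the talks above:

```python
from collections import deque

class InputQueue:
    """Server-side queue of (input, duration_seconds) segments sent by a client.

    consume(dt) hands back the inputs covering the next dt seconds, so a
    variable-length server frame can integrate the right input for the right
    fraction of the frame.
    """
    EPS = 1e-9  # tolerance for float round-off when exhausting a segment

    def __init__(self):
        self.segments = deque()  # each entry: [input, seconds_remaining]

    def push(self, inp: float, duration: float):
        self.segments.append([inp, duration])

    def consume(self, dt: float):
        # Returns a list of (input, seconds) covering up to dt seconds.
        applied = []
        while dt > self.EPS and self.segments:
            inp, remaining = self.segments[0]
            used = min(dt, remaining)
            applied.append((inp, used))
            dt -= used
            self.segments[0][1] -= used
            if self.segments[0][1] <= self.EPS:
                self.segments.popleft()
        return applied
```

The obvious open problems (which is why I'm asking) are what to do when the queue runs dry mid-frame, and whether splitting an input across two physics steps of different lengths on client and server diverges too quickly to correct.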
I would appreciate any insight or experiences in this area so that I don't get target-fixated on this strategy and perform unnecessary work retrofitting fixed-step updates into a tech stack which does not have them.
Many thanks.