(no title)
pahgawk | 7 months ago
- Part of my use case is that I build animation software. In there, you've got a timeline, and you can seek anywhere on the timeline. So in that scenario, you're not always moving consistently forward in time.
- In real-time contexts, sometimes you drop frames, even for simple motion, just due to the hardware it's being run on and what else the computer is doing. Simulations can be sensitive to time steps and produce slightly different results depending on them. The size of the issue depends on a lot of factors, but you don't have that issue at all with a closed-form solution.
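To make the time-step sensitivity concrete, here is a minimal sketch (my own illustration, not from the thread): explicit Euler on the decay ODE x' = -k·x at two different step sizes, versus the closed form exp(-k·t). The function name `euler_decay` and the specific constants are just for the example.

```python
import math

def euler_decay(k, dt, n_steps):
    """Integrate x' = -k*x, x(0) = 1 with explicit Euler at fixed dt."""
    x = 1.0
    for _ in range(n_steps):
        x += dt * (-k * x)
    return x

k, t_end = 4.0, 1.0
exact = math.exp(-k * t_end)        # closed form: x(t) = exp(-k*t)
coarse = euler_decay(k, 0.1, 10)    # large steps, as with dropped frames
fine = euler_decay(k, 0.01, 100)    # small steps
# coarse and fine disagree with each other and with the closed form;
# the closed form gives the same answer at any query time, regardless
# of how the timeline was stepped.
```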
MITSardine | 7 months ago
For your second point, you can decouple the ODE time stepping from the frame time stepping. I think it suffices to step the ODE until the first ODE time step that is greater than the current frame time, and interpolate between that and the previous ODE time step.
This technique is used in loosely coupled systems, for instance in mechanics, where a rigid body needs a much smaller time step (higher frequencies) than a soft body to compute its dynamics, but you still need common time steps to compute interactions. Oftentimes the time steps are dictated by CFL conditions, and they may not even be integer multiples of each other.
However, your first point is where I see the iterative approach really wouldn't work. Especially if the user might change the parameters before you've done anything with them, it wouldn't make sense to precompute values using the iterative scheme. Otherwise, that could be done, and then values interpolated between steps.
If you have few parameters, one solution that comes to mind, if you cannot find a closed-form solution, is to grid the parameter space and precompute the curve at each node of the grid. Then, when the user requests any value, simply localize it in the grid and interpolate from the nodes of the surrounding element. It becomes problematic if the start and end points are themselves parameters, though... In that case I suppose a linear transform of the curve to fit the end points precisely would be in order. Since that only involves the two end points, it wouldn't be very complicated.
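A rough sketch of the gridded-precompute idea, with a made-up two-parameter curve family `curve(a, b, t)` standing in for a numerically solved ODE (all names and grid sizes here are hypothetical, just to show the localize-and-interpolate step):

```python
from bisect import bisect_right

# Hypothetical two-parameter curve family; in practice each row of
# TABLE would come from running the iterative ODE solver offline.
def curve(a, b, t):
    return t ** a * (1.0 - b * (1.0 - t))

A = [0.5 + 0.5 * i for i in range(6)]   # grid nodes for parameter a
B = [0.25 * j for j in range(5)]        # grid nodes for parameter b
TS = [k / 63 for k in range(64)]        # time samples per curve
TABLE = {(i, j): [curve(A[i], B[j], t) for t in TS]
         for i in range(6) for j in range(5)}   # precomputed offline

def lookup(a, b, t):
    """Localize (a, b) in the grid, bilinearly blend the four
    surrounding precomputed curves, then lerp in t."""
    i = min(max(bisect_right(A, a) - 1, 0), len(A) - 2)
    j = min(max(bisect_right(B, b) - 1, 0), len(B) - 2)
    u = (a - A[i]) / (A[i + 1] - A[i])
    v = (b - B[j]) / (B[j + 1] - B[j])
    k = min(int(t * 63), 62)            # time sample below t
    w = t * 63 - k
    def at(ci, cj):                     # sample one stored curve at t
        row = TABLE[(ci, cj)]
        return row[k] + w * (row[k + 1] - row[k])
    return ((1 - u) * (1 - v) * at(i, j) + u * (1 - v) * at(i + 1, j)
            + (1 - u) * v * at(i, j + 1) + u * v * at(i + 1, j + 1))
```

At query time this is just two binary searches and a handful of multiplies, independent of how expensive the original ODE solve was.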
An improvement would be an adaptive grid: each time a point is inserted in a hypercube, also compute at the projections of that point onto the hypercube's facets (and then the projections of those onto the facets' facets, and so on...), and split the hypercube. Decide whether to insert based on an indicator that takes into account the solutions at the nodes of the initial hypercube (if they are too different, a split is in order). Maybe this is too much complication for something this simple, but anyway, there would be solutions for more difficult ODEs that don't have closed-form solutions. Note that if a single family of ODEs is considered, this can all be done offline, or online with caching.
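A hedged 1-D sketch of that refinement indicator (the full hypercube/facet-projection bookkeeping is omitted; `solve` is a stand-in for solving the ODE at one parameter value, and the depth cap is arbitrary):

```python
def refine(solve, lo, hi, tol, depth=6):
    """Return sorted parameter nodes over [lo, hi], splitting an
    interval whenever the solutions at its endpoints differ by
    more than tol (the 'too distinct' indicator)."""
    if depth == 0 or abs(solve(hi) - solve(lo)) <= tol:
        return [lo, hi]
    mid = 0.5 * (lo + hi)
    left = refine(solve, lo, mid, tol, depth - 1)
    right = refine(solve, mid, hi, tol, depth - 1)
    return left[:-1] + right        # mid appears in both halves once

nodes = refine(lambda p: p ** 3, 0.0, 1.0, tol=0.1)
# Nodes cluster where p**3 varies fastest (near p = 1); queries then
# localize in `nodes` (or a k-d tree in higher dimensions) and
# interpolate between the cached solutions.
```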
There are more sophisticated methods like PINNs, of course, but I don't know whether you'd gain anything in performance over a bespoke scheme, particularly if the inference step is more expensive than localizing in a k-d tree and interpolating from a few points.