(no title)
mmorse1217 | 4 months ago
Since you’re interested in doing this on GPU, an approach that might be interesting to you (although not necessarily more efficient) would be to leverage the intrinsic properties of Bezier curves to feed a near-optimal initial guess to Newton. Some useful facts about Bezier curves: i) a Bezier curve lies inside the convex hull of its control points; ii) a Bezier curve defined on [0,1] can be split at any t into two Bezier curves, defined on [0,t] and [t,1], that together trace the same curve with a tighter control polygon; iii) this splitting can be done using repeated linear combinations of the control points (de Casteljau’s algorithm), so you can skip evaluating Bernstein polynomials directly; iv) there is a mapping from Bezier control points to corresponding values in the [0,1] parameter space (the term for this for B-splines is Greville abscissae; I’m not sure there is an explicit name for the Bezier equivalent, but basically the preimage of control point b_i of a degree-d curve is i/d, for i = 0, …, d).
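Fact iii) is just de Casteljau’s algorithm run to completion while keeping the intermediate points. A minimal NumPy sketch (the function name is mine):

```python
import numpy as np

def de_casteljau_split(ctrl, t):
    """Split a Bezier curve with control points `ctrl` (shape (d+1, dim))
    at parameter t. Returns the control points of the left ([0, t]) and
    right ([t, 1]) subcurves using only repeated linear interpolation --
    no Bernstein polynomials are evaluated."""
    pts = np.asarray(ctrl, dtype=float)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        # one de Casteljau level: lerp every adjacent pair of points at once
        pts = (1 - t) * pts[:-1] + t * pts[1:]
        left.append(pts[0])      # first point of each level -> left subcurve
        right.append(pts[-1])    # last point of each level  -> right subcurve
    return np.array(left), np.array(right)[::-1]
```

The last point of the left polygon (equivalently, the first of the right) is the curve point at t, so the same routine doubles as an evaluator.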
These things together sort of imply an algorithm:

1. Subdivide the Bezier curve c into 2 or 3 subcurves c_1, c_2, c_3.
2. Find the control point b_j closest to the target point x.
3. Choose the subcurve c_i that b_j belongs to: this subcurve contains the closest point to x.
4. Set c = c_i and go back to step 1; repeat this loop several times.
5. Compute the preimage t' of the closest control point b_j on c (j/d, plus some shift and rescaling into the subcurve’s parameter interval). This is the initial guess for Newton’s method.
6. Solve for the closest point on the selected subcurve to x with Newton’s method; this should converge in very few steps because your initial guess is so good, quadratic convergence, blah, blah blah.
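The whole loop can be sketched in a few dozen lines of NumPy. This is only an illustration of the steps above, not the linked implementation; the function names and the iteration counts (`n_subdiv`, `n_newton`) are my own, and it splits into 2 subcurves at the midpoint rather than 2 or 3:

```python
import numpy as np

def _eval(pts, t):
    """Evaluate a Bezier curve (or derivative curve) via de Casteljau."""
    pts = np.asarray(pts, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def _split(pts, t):
    """De Casteljau split: control points of the [0,t] and [t,1] halves."""
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right)[::-1]

def closest_point(ctrl, x, n_subdiv=10, n_newton=4):
    ctrl = np.asarray(ctrl, dtype=float)
    x = np.asarray(x, dtype=float)
    d = len(ctrl) - 1
    lo, hi, sub = 0.0, 1.0, ctrl
    # Steps 1-4: repeatedly keep the half whose control polygon is closest to x
    for _ in range(n_subdiv):
        left, right = _split(sub, 0.5)
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(left - x, axis=1).min() <= np.linalg.norm(right - x, axis=1).min():
            hi, sub = mid, left
        else:
            lo, sub = mid, right
    # Step 5: preimage j/d of the closest control point, rescaled into [lo, hi]
    j = np.argmin(np.linalg.norm(sub - x, axis=1))
    t = lo + (hi - lo) * (j / d)
    # Step 6: Newton on f(t) = (c(t) - x) . c'(t) = 0, on the original curve
    d1 = d * (ctrl[1:] - ctrl[:-1])        # control points of c'
    d2 = (d - 1) * (d1[1:] - d1[:-1])      # control points of c''
    for _ in range(n_newton):
        c, cp, cpp = _eval(ctrl, t), _eval(d1, t), _eval(d2, t)
        f = (c - x) @ cp
        fp = cp @ cp + (c - x) @ cpp
        if abs(fp) < 1e-14:
            break
        t = min(1.0, max(0.0, t - f / fp))
    return t, _eval(ctrl, t)
```

Note the Newton derivative control points assume degree >= 2; a degree-1 curve would need a special case.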
The break-even point between this kind of algorithm and a derivative-based one is very unclear on CPU. But on GPU, I think the computation can be structured in an architecture-friendly way: computing the Euclidean distance between x and all the control points, and the Bezier curve splitting itself, can both be written in a vectorizable manner, so you will probably see a decent speedup. I’ve only really worked with CUDA, though, so I’m not sure how cleanly this idea maps to GLSL.
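To illustrate the vectorization claim: the distance test at each subdivision step is one broadcasted array operation with no branches or per-point loops, so many query points can be processed in lockstep (the sort of shape that maps well to SIMD lanes or GPU threads). A toy NumPy version, with a made-up function name:

```python
import numpy as np

def batched_control_point_distances(ctrl, queries):
    """For N query points at once, find each point's closest Bezier
    control point and its distance, using a single broadcasted op."""
    ctrl = np.asarray(ctrl, dtype=float)        # shape (d+1, dim)
    queries = np.asarray(queries, dtype=float)  # shape (N, dim)
    diff = queries[:, None, :] - ctrl[None, :, :]   # (N, d+1, dim)
    dist = np.linalg.norm(diff, axis=-1)            # (N, d+1)
    return dist.argmin(axis=1), dist.min(axis=1)    # per-query closest b_j
```

On an actual GPU you would more likely give each thread one query point and let it run its own subdivision loop, but the arithmetic per step is the same branch-free lerp-and-distance pattern.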
Here’s an example of the algorithm above for CPU if you are interested: https://github.com/qnzhou/nanospline/commit/5ac97722414dbc75...
GistNoesis | 4 months ago
I am happy to see other people use the approach I consider more natural.
It's a generic global optimization approach, and geometry is always full of pathological edge cases, so it's hard to tell if you've missed any. Getting it to work in the average case is usually easy, but being sure it always works is much harder.
mmorse1217 | 4 months ago
Yes, the real trouble is true optimality guarantees. I remember there were edge cases where the above approach needed a lot of subdivision steps to succeed for general-degree curves, so it might end up worse than a more rigorously justified approach in those cases.
mmorse1217 | 4 months ago
I think this is the main reference for this algorithm (should be able to search for the pdf): https://www.sciencedirect.com/science/article/abs/pii/S01678...