Re: Bezier lengths w/o lots of resources

Hi Peter,

Peter Dickerson wrote:
"D Yuniskis" <> wrote in message news:i0derm$en9$1@xxxxxxxxxxxxxxxxxxxx

OK, I made a stab at trying to "adjust" this for
a more efficient algorithm (2.5 sqrts/iteration).

Sorry, I don't understand where the 2.5 sqrts/iteration comes from. When you split the curve and have two sets of four control points, you need eight distances (sqrts), but none of those have been computed at the previous level. The only place I see five appear is that there are five new points to calculate, one of which is used twice (giving six), plus the start and end points from the original (now we have eight). None of that uses sqrts, and the time to calculate it is potentially insignificant relative to the sqrts (e.g., it can be done with integers of sufficient size by rescaling by eight at each nesting).

Given a Start point (S), departing control point (D), arriving
control point (A), and End point (E), [i.e., these are P0 - P3]:

The first iteration computes the lengths of |SD|, |DA|, |AE| and
compares their sum against |SE| (i.e., 4 sqrts).

Then the curve is subdivided into two half-curves ("Left" and
"Right"). The control points for the two half-curves are:
M = (D+A)/2

L0 = S
R3 = E

L1 = (S+D)/2
R2 = (A+E)/2

L2 = (L1+M)/2
R1 = (M+R2)/2

L3 = R0 = (L2+R1)/2
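In code, that midpoint construction might look like this (a sketch in C; the names `Pt`, `mid` and `split_half` are mine, but the point labels follow the construction above):

```c
typedef struct { double x, y; } Pt;

/* midpoint of two points: (a + b) / 2 */
static Pt mid(Pt a, Pt b)
{
    Pt m = { (a.x + b.x) / 2.0, (a.y + b.y) / 2.0 };
    return m;
}

/* Split the cubic S, D, A, E at t = 1/2 into a Left half L0..L3
   and a Right half R0..R3, exactly as constructed above.
   No sqrts are involved -- midpoints only. */
static void split_half(Pt S, Pt D, Pt A, Pt E, Pt L[4], Pt R[4])
{
    Pt M = mid(D, A);

    L[0] = S;            R[3] = E;
    L[1] = mid(S, D);    R[2] = mid(A, E);
    L[2] = mid(L[1], M); R[1] = mid(M, R[2]);
    L[3] = R[0] = mid(L[2], R[1]);
}
```

As noted above, this can be done exactly in integers by rescaling: the halvings nest three deep per level, hence the factor of eight.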

Looking at the Left half curve L0, L1, L2, L3:

Compute |L0L1|, |L1L2| and |L2L3| and compare against |L0L3|.

But, |L0L1| is actually just half of |SD| -- which we already
have from the first iteration!

However, we do have to compute |L1L2|. And |L0L3|.
That's two sqrts.

OTOH, when we compute |L2L3|, this is really half of |L2R1|.
So, whatever result we get there we can use for the |R0R1|
computation! So, that's a 0.5 sqrt.

(Did I do that right?)

Trusting your "results", I assume the table would be:

sqrts   DY length     DY Romberg
  4     1.000000000
  5     1.002470605   1.002635312
 10     1.000128079   0.999971911
 20     1.000007988   0.999999982
 40     1.000000499   1.000000000
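For what it's worth, the Romberg column above is numerically consistent with plain Richardson extrapolation of successive estimates, assuming the estimate's error falls off as h^4 -- i.e., R = (16*fine - coarse)/15. A one-liner to check (my function name):

```c
/* Richardson ("Romberg") extrapolation of two successive length
   estimates, assuming the error shrinks as h^4: the combination
   (16*fine - coarse)/15 cancels the leading error term. */
static double romberg(double coarse, double fine)
{
    return (16.0 * fine - coarse) / 15.0;
}
```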

I.e., at this point, I have invested *79* sqrts (4+5+10+20+40).
Note, however, that some of those might never occur if the
estimate for a particular "subcurve" (ick!) looks "good
enough" (it need not be further subdivided).

You've still lost me.

Assume I am correct with 2.5 sqrts per "half curve".
And 4 on the first iteration.

So, second iteration is 5 (2.5 + 2.5) sqrts (assuming
I am not looking at the closeness of fit).

Third iteration cuts each of the two half curves from
the second iteration in half using 5 sqrts for each
of the second iteration half curves -- 10 sqrts total.

20 for the next. etc.

Since my algorithm starts with the *first* iteration
and conditionally subdivides, it takes on the costs of
each iteration as it subdivides the curve(s). So,
my total cost is the sum of the costs of *every*
iteration performed along the way.

I.e., my code computes "sum of segments" and "endpoint
length"; checks to see how close they are to each other
(or, does your magic Romberg estimate and compares
*that* to these lengths to see how closely *it*
fits); and, if the fit isn't good enough, *then*
it subdivides the curve and does another iteration of
chord length computations. (But it has already
incurred the cost of the computations leading up to
that decision to recurse!)

If I blindly force N subdivisions, it changes the mix.

I'll try to make some time to code up a "real" algorithm
and instrument SQRT() to do the counting for me with
various "estimate tolerances".

Surely you can identify any 'sharp' regions of the curve from the radius of curvature (or at least its square, otherwise you'll need a sqrt for that). A proxy might be dx/dt*d2y/dt2 - dy/dt*d2x/dt2 being large in magnitude -- it is the numerator of the signed curvature.
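That cross term is indeed sqrt-free and cheap to evaluate. A sketch for the cubic case (my function name; since the signed curvature is (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), "sharp" shows up as this value being large relative to the cube of the speed):

```c
typedef struct { double x, y; } Pt;

/* sqrt-free sharpness proxy for the cubic S, D, A, E at parameter t:
   returns x'(t)*y''(t) - y'(t)*x''(t), the numerator of the signed
   curvature.  Exactly zero when the control points are collinear. */
static double curv_cross(Pt S, Pt D, Pt A, Pt E, double t)
{
    double u = 1.0 - t;

    /* first derivative: B'(t) = 3[u^2(D-S) + 2ut(A-D) + t^2(E-A)] */
    double dx = 3.0 * (u*u*(D.x - S.x) + 2.0*u*t*(A.x - D.x) + t*t*(E.x - A.x));
    double dy = 3.0 * (u*u*(D.y - S.y) + 2.0*u*t*(A.y - D.y) + t*t*(E.y - A.y));

    /* second derivative: B''(t) = 6[u(A - 2D + S) + t(E - 2A + D)] */
    double ddx = 6.0 * (u*(A.x - 2.0*D.x + S.x) + t*(E.x - 2.0*A.x + D.x));
    double ddy = 6.0 * (u*(A.y - 2.0*D.y + S.y) + t*(E.y - 2.0*A.y + D.y));

    return dx * ddy - dy * ddx;
}
```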