Re: Data alignment and endianness

On 01/26/2011 12:17 PM, Ben Bacarisse wrote:
James Kuyper <jameskuyper@xxxxxxxxxxx> writes:

On 01/26/2011 09:23 AM, Ben Bacarisse wrote:
No, and I don't see how you could suspect me of such an attitude. My
argument is solely with the oft-quoted (and to my mind naive) arithmetic
that I originally commented on. That arithmetic could be used to argue
*against* writing efficient software (just as it can also be used to
argue in its favour). It is sometimes worth taking care of efficiency
even when the sum of time saved by others won't add up to the time
spent optimising it (and, equally, it is sometimes not worth it even
when it does).

The point I was trying to make is that the math being used is
perfectly valid.

Ah. You asked me if I thought developers would be right to take the
attitude that "no one cares about saving 2.5 milliseconds". I think
it's understandable that I did not get that you were just talking about
the validity of the arithmetic.

If you define "valid" I might be able to comment. ...

In this context, "validity" might not be precisely the right term. What I mean is that it is in fact reasonable and appropriate to perform that multiplication as part of the process of deciding whether it's worth bothering to perform an optimization. At least conceptually - hard numbers to put into the calculation are often difficult or impossible to obtain. But the basic concept is important to be aware of: saving an infinitesimal amount of time on sufficiently many different occasions can be worth doing, even though each individual time saving is too small to even be perceived.

... I am not saying that
1,000 times 1ms != 1s. I am saying that 1,000 separate milliseconds are
not always as useful as the single second (even when scaled by some
value multiplier as you suggest). I believe that in some situations
there is a non-linear scale that devalues small savings.

I disagree. To me, the typical situation is that the value of taking a certain amount of time to perform a task can, at least conceptually, be described as a function of that amount of time, V(t). V(t) is usually a decreasing function (the parts of the curve where it isn't, if any, aren't normally relevant to optimization). The value of shortening a single performance of the task by delta_t is, to a good approximation, -dV(t)/dt * delta_t. The value of saving that time on a million performances of the task is then -1000000 * dV(t)/dt * delta_t. If you want a little more realism, there's a different V_i for each task, so the more realistic formula is

- Sum over i of (dV_i(t)/dt * delta_t)

However, by the definition of an average, that can simply be re-written as

- 1000000 * Average over i of (dV_i(t)/dt) * delta_t

or, regrouping the factors,

- Average over i of (dV_i(t)/dt) * 1000000 * delta_t

The first factor in that product is the value multiplier.

In order for this to be a non-linear function of delta_t for small values of delta_t, dV(t)/dt * delta_t would have to be a VERY bad approximation to V(t+delta_t) - V(t). There are functions for which that is true, but only near singularities of the function. Such functions do not seem plausible to me as descriptions of the value of completing a task in the amount of time t.

James Kuyper