Re: On writing negative zero - with or without sign

tholen@xxxxxxxxxxxx wrote:

Z = (Y - X)/A


Z = Y/A - X/A

are algebraically equivalent. Suppose (Y - X)/A underflows, but
neither Y/A nor X/A underflows, and the sign bit wouldn't be set
for the difference.

If you expect algebraic equivalence to result in computational
equivalence, you're *really* clueless about numerical computing.
Floating-point arithmetic doesn't always obey the distributive or
associative rules. Fact of life. Live with it.
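The non-equivalence is easy to demonstrate. A minimal sketch (Python is used here purely as an illustration vehicle; the same behavior occurs in any IEEE 754 implementation):

```python
# Associativity fails in floating point: the two groupings round
# differently, so algebraically equal expressions compare unequal.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a)        # 0.6000000000000001
print(b)        # 0.6
print(a == b)   # False
```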

Assuming both of the above result in a zero value of Z (of some
sign), they'll compare equal. That's more than you have any right
to expect.
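For instance, underflow preserves the sign even when the magnitude is lost, and the two zeros still compare equal (a Python sketch, illustrative only):

```python
import math

z = -1e-300 / 1e300       # exact result is -1e-600: far below the
                          # smallest subnormal, so it underflows to zero
print(z)                  # -0.0: the sign of the exact result survives
print(z == 0.0)           # True: IEEE 754 zeros compare equal either way
print(math.copysign(1.0, z))  # -1.0: the sign bit is still recoverable
```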

I wrote:
Then the *actual* difference is approximately the same magnitude
as the error in the less accurately calculated of the two operands.
For nearly all calculations, that's a lot bigger than zero.

You're assuming that the numbers are calculated. I had previously
given an example in which the numbers being differenced were
assigned, not calculated (or manipulated, to use my previous term).

And you *know* those assigned values are *infinitely* accurate?
I'm astonished. What units of measure did you use to find
numerical constants that can be written as finite decimal
literals, that convert *exactly* to a binary float representation,
and that are *infinitely* accurate with some genuine physical meaning?
That's some feat! And, if you already know they're equal, why
are you subtracting them? Surely just assigning 0.0 instead of
using an expression is better?
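The point about decimal literals is easy to check: most finite decimal constants have no exact binary representation (Python's decimal module shows the value actually stored; illustrative only):

```python
from decimal import Decimal

# The literal 0.1 cannot be represented exactly in binary; what gets
# stored is the nearest double, whose exact decimal expansion is:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Only values whose fractional part is a sum of powers of two
# convert exactly, e.g. 0.5 = 2**-1:
print(Decimal(0.5))       # 0.5
```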

Suppose in your temperature example, the first value was really
15.01 and the second value was 14.99, but the implementation
couldn't know that and used a plus sign, thus lying to the user.

That's what *you've* been requesting! A plus sign and an absent
sign have *identical* meanings. In any case, the float implementation
doesn't have the *actual* numbers and does the best it can. It can't
reinvent the missing information. The thermometer rounded to the
nearest tenth and that's that.
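A sketch of the thermometer scenario (Python, illustrative; the values 15.01 and 14.99 are from the example above):

```python
import math

# True temperatures were 15.01 and 14.99, but the thermometer only
# reports tenths, so the program ever sees only the rounded readings:
first = 15.0
second = 15.0
diff = first - second
print(diff)                       # 0.0
print(math.copysign(1.0, diff))  # 1.0: a plain positive zero
# The true difference of 0.02 is gone; no output convention can
# reinvent information the measurement already discarded.
```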

The I/O implementation *does* have sign information and you
advocate throwing it away. That's *not* the best it can do. You're
destroying information.
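A sketch of the distinction (Python, illustrative only): the output routine reports the sign it has; a caller who considers it meaningless can discard it, but that should be the caller's decision:

```python
z = -0.0
print(f"{z:.2f}")       # -0.00: the I/O layer reports the stored sign
print(f"{abs(z):.2f}")  # 0.00: discarding the sign is the user's
                        # choice, via abs(), not the I/O layer's
```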

But, an absent sign *means* positive zero!

I disagree.

Go ahead. You can pretend that when I say "tea" it doesn't
mean a beverage, if you choose. You won't be showing much
inclination toward rational communication with others if you do.
I'm telling you what signs mean within the IEEE standard.
They weren't particularly interested in numerological mysticism
when they designed the floating point representation. They had
practical problems to solve. Signed zeros are an advance on
previous floating point implementations: if they're meaningless,
you can ignore them (but that should be each user's choice; the
I/O implementer should not be able to force an arbitrary decision
on others). Often they aren't meaningless; often they're important.
They do *NO* damage when they're meaningless. They help a lot
when they're not.

J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare