# Re: On writing negative zero - with or without sign

James Giles writes:

But

Z = (Y - X)/A

and

Z = Y/A - X/A

are algebraically equivalent. Suppose (Y - X)/A underflows, but
neither Y/A nor X/A underflows, and the sign bit wouldn't be set
for the difference.

If you expect algebraic equivalence to result in computational
equivalence you're *really* clueless about numerical computing.

Isn't the processor free to do that? You recently gave the example
of SQRT(X*X) being replaced by the equivalent ABS(X).

No floating-point arithmetic always obeys the distributive or
associative rules. Fact of life. Live with it.
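A concrete instance of the failure described above can be shown in any IEEE 754 double-precision implementation. This is only an illustrative sketch (the particular values are chosen in the subnormal range so that the underflow actually happens):

```python
# Algebraically, (y - x)/a and y/a - x/a are identical.
# Computationally, under IEEE 754 doubles, they need not be.

u = 2.0 ** -1074   # smallest positive subnormal double
y = 3 * u
x = 2 * u
a = 4.0

print((y - x) / a)     # u/4 underflows to 0.0
print(y / a - x / a)   # nonzero: y/a rounds up to u, x/a ties-to-even down to 0.0
```

The first expression underflows to an exact zero while the second does not, so the two "equivalent" formulas don't even agree on whether the result is zero.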

Assuming both the above result in a zero value of Z (of some
sign), they'll compare equal. That's more than you've any right
to expect.

You've raised yet another interesting concept: equality. Is a
negative zero equal to a positive zero?
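For what the IEEE standard itself says: yes, the comparison treats them as equal, even though the two bit patterns differ. A quick Python sketch:

```python
import math
import struct

print(-0.0 == 0.0)               # True: comparison ignores the sign of zero
print(math.copysign(1.0, -0.0))  # -1.0: but the sign bit is really there
print(struct.pack('<d', 0.0) == struct.pack('<d', -0.0))  # False: distinct bit patterns
```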

Then the *actual* difference is approximately the same magnitude
as the error in the least accurately calculated of the two operands.
For nearly all calculations that's a lot bigger than zero.

You're assuming that the numbers are calculated. I had previously
given an example in which the numbers being differenced were
assigned, not calculated (or manipulated, to use my previous term).

And you *know* those assigned values are *infinitely* accurate?

I expect that two variables initialized to the same value will have
the same internal representation. That is

X = 1.0
Y = 1.0

X and Y better have the same internal representation. When you
difference them, I consider that to be an exact zero.
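That expectation is in fact guaranteed by IEEE 754: equal operands subtract to exactly zero, and in the default round-to-nearest mode that zero specifically gets a plus sign. A Python sketch:

```python
import math

x = 1.0
y = 1.0
z = x - y
print(z == 0.0)               # True: an exact zero
print(math.copysign(1.0, z))  # 1.0: IEEE 754 gives x - y a plus sign when x == y
```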

I'm astonished. What were the units of measure that you used
to find numerical constants that can be written as finite decimal
literals, that convert *exactly* to a binary float representation,
and are *infinitely* accurate with some genuine physical meaning?

You're presupposing that I know the assigned values are infinitely
accurate. I never said that. All I said is that their internal
representations are the same, such that when you difference them,
you get an exact zero. Conversely, it's entirely possible that in

X = 1.000000000000000000000000000000000000000001
Y = 1.0

X and Y will have the same internal representation, thus the
difference would be an exact zero, yet their difference is
clearly not zero, and could be represented by the processor
(as +/- 1.0E-N, with the sign depending on which way you do
the differencing).
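This is easy to confirm on any IEEE 754 double implementation; for instance, in Python:

```python
x = 1.000000000000000000000000000000000000000001
y = 1.0
print(x == y)        # True: the literal rounds to the nearest double, which is 1.0
print(x - y == 0.0)  # True: an "exact" zero, though the decimal difference is 1e-42
print(1e-42 > 0.0)   # True: ...and 1e-42 itself is perfectly representable
```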

That's some feat! And, if you already know they're equal, why
are you subtracting them?

To demonstrate my concept of an exact zero. Nothing more. That is,
after all, what you asked about. Just trying to answer your question
in an understandable way.

Surely just assigning 0.0 instead of
using an expression is better?

If I had done that, would it have demonstrated my concept of an
exact zero any better for you?

But you've raised yet another interesting concept: can the user
initialize a variable with a minus zero? Or alternatively, when
the user initializes a variable to zero, did he give it a plus
zero or a minus zero? Initializing a variable to zero is extremely
common, after all. Lots of compilers initialize all numeric
variables to zero. Are they plus zero, minus zero, or perhaps
an exact zero, if you accept such a concept?
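In IEEE-based languages the user generally *can* initialize a variable to minus zero, and a plain `0.0` literal gives a plus zero. A Python sketch:

```python
import math

pos = 0.0    # a plain zero literal carries a plus sign
neg = -0.0   # a negated zero literal carries the sign bit
print(math.copysign(1.0, pos))  # 1.0
print(math.copysign(1.0, neg))  # -1.0
print(str(neg))                 # '-0.0': the sign survives output, too
```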

Suppose in your temperature example, the first value was really
15.01 and the second value was 14.99, but the implementation
couldn't know that and used a plus sign, thus lying to the user.

That's what *you've* been requesting!

Not at all.

A plus sign and an absent sign have *identical* meaning.

Not according to the above example.

In any case, the float implementation doesn't
have the *actual* numbers and does the best it can. It can't reinvent
the missing information. The thermometer rounded to the nearest
tenth and that's that.

So there's no way for the processor to know which sign to assign.

The I/O implementation *does* have sign information

How? The thermometer destroyed it.

and you advocate throwing it away.

I do not advocate throwing away information. I advocate not misleading
the user into thinking there is information present when there isn't.

That's *not* the best it can do. You're destroying information.

How so? It's the thermometer that destroyed the information by rounding
to the nearest tenth, and that's that. The result was a difference of
zero, and there's no way to know whether the difference should have been
positive or negative.

But, an absent sign *means* positive zero!

I disagree.

Go ahead. You can pretend that when I say "tea" it doesn't
mean a beverage if you choose. You will not be showing much
rational inclination for communicating with others if you do.

By talking about *infinitely* accurate above, you did not show any
inclination for communicating with me, because that's not at all
what I was talking about.

I'm
telling you what the meaning of signs is within the IEEE standard.

I'm telling you that there can be cases where the sign can't be
known to the processor, therefore to display a sign could be
misleading.

They weren't particularly interested in numerological mysticism
when they designed the floating point representation.

Neither am I.

They had
practical problems to solve. Signed zeros are an advance on
previous floating point implementations:

Only if people know how to interpret them properly, and only if
they are assigned properly. I've never had a compiler manual
document how it assigns the sign of a zero result, so how is the
user supposed to know how to interpret them?

if they're meaningless you can ignore them

You're assuming that the user will always know when they're
meaningless. That could be easy in simple cases. What about
complex cases, where the number you get out at the end is the
result of thousands or millions of intermediate calculations?

(but that should be each user's choice, the
I/O implementer should not be able to force an arbitrary decision
on others),

Exactly. In your thermometer example, the decision is arbitrary.

often they aren't meaningless - often they're important.
They do *NO* damage when they're meaningless.

That assumes the user knows they're meaningless.

They help a lot when they're not.
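One concrete case where the sign of a zero carries real information is along branch cuts; `atan2` is the classic example. A Python sketch:

```python
import math

# Approaching the negative real axis from above versus below:
print(math.atan2(0.0, -1.0))   # pi
print(math.atan2(-0.0, -1.0))  # -pi: the sign of the zero selects the branch
```

Here throwing away the sign of the zero argument would merge two genuinely different answers.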

