# Re: On writing negative zero - with or without sign

tholen@xxxxxxxxxxxx wrote:
James Giles writes:

To summarize:

Point one: when used to simulate the continuous real numbers,
floating point is *always* inexact.

I disagree. There is a subset of the continuous real numbers that
can be exactly represented with a finite number of bits.

It's a small, oddly shaped subset of the rationals. If and only if
*ALL* of your values are members of that set can you say that the
arithmetic is exact. And I've already said that several times. So
don't pretend you're making any points. If *any* one of those
values isn't in the set, your floating point is inexact.
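For illustration, a minimal Python sketch (Python floats are IEEE 754
doubles): values with a finite binary expansion survive conversion
exactly; values like 0.1 do not, and the rounding shows up in the
arithmetic.

```python
# 0.5, 0.25, and 0.75 have finite binary expansions, so they are
# represented exactly and their arithmetic is exact.
assert 0.5 + 0.25 == 0.75

# 0.1 and 0.2 have no finite binary expansion; each is rounded on
# conversion, and the accumulated rounding makes the comparison fail.
assert 0.1 + 0.2 != 0.3
```

That is the whole content of point one: exactness holds only while every
operand stays inside that special subset.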

It's comical that you focus on the *KNOWN* most error prone
operation in all of floating point as your shining example
of exactitude: subtract. That shows such a complete lack
of understanding of the issues as to be astonishing.

There is no "exact zero", or any other real number.

That is a semantic argument. I provided one concept of an exact
zero. I argue that it's a reasonable concept. I've not seen anyone
offer a counter argument for it not being a reasonable concept.

What you offered is in conflict with the definitions of floating
point and, comically, with the common experience of users of
such numbers. Subtract is the *shortest* path away from
exactitude.

Nor is there any reason zeros can't have signs.

Agreed. But that doesn't mean that zeros must always have signs.

But IEEE zeros always do. Live with it.
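This is easy to observe in Python (IEEE doubles): the two zeros
compare equal, but each carries its own sign bit, and that bit is
visible to any operation that cares.

```python
import math

pos = 0.0
neg = -0.0

# The two zeros compare equal under == ...
assert pos == neg

# ...but carry distinct sign bits, which copysign exposes:
assert math.copysign(1.0, pos) == 1.0
assert math.copysign(1.0, neg) == -1.0

# The sign bit is not cosmetic: atan2 uses it to choose the branch.
assert math.atan2(pos, neg) == math.pi
assert math.atan2(pos, pos) == 0.0
```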

Point two: on output the I/O implementation should always
correctly display a minus sign if the number's internal sign
bit was set. This is true for all magnitudes of values being
output. Not to do so is irresponsible.
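Most IEEE-aware runtimes already behave as point two demands. A quick
Python check:

```python
# An I/O layer that honors the sign bit prints a minus sign for
# negative zero, no matter how that bit came to be set.
assert f"{-0.0}" == "-0.0"
assert "%g" % -0.0 == "-0"
assert f"{0.0 * -1.0}" == "-0.0"   # sign set by an operation; same output
```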

That avoids the more fundamental question of how to decide when
to set that internal sign bit. [...]

BUZZ. The buzzer means you lose all your points and will have
to settle for your parting gifts. This is point two now. Yes, we
are here avoiding the fundamental question you mention. Within
point two, it is *irrelevant* how the bit got set. The only issue
is what the I/O library does with it. You keep avoiding that
issue, which is strange, since it's the principal subject of the thread.

Now, another contributor to this thread consistently confuses
these separate points.

I'm not aware of any such contributor.

You just did it! You started in about a point one issue in response
to point two! For the question about what the I/O implementation
does, *HOW* the bit got set is *irrelevant*. The I/O library doesn't
know how the bit got set and therefore the decision can't be based
on that information.

On what basis do you claim that differences of identically
represented numbers are not exact zeros?

By definition. The difference of two inexactly known quantities
can't be exact. Since the purpose of floating point is approximation
of continuous real numbers, such differences can *never* be regarded
as exact.
The common observation of the programming community that
subtraction of similar values leaves essentially no significance
ought to be a clue. That's an empirical observation about the
result of using floats. It is most common to regard differences
resulting in zero with particular dismay - not only is the
relative error in the result *infinite*, the absolute error can't
even be estimated (not even to within a few orders of magnitude)
from just the value itself. Zero, as a difference, usually carries
no useful information at all.
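The infinite-relative-error point can be demonstrated directly; a
minimal Python sketch, assuming IEEE doubles:

```python
# 1e-16 is below half an ulp of 1.0 (machine epsilon is ~2.2e-16),
# so the sum rounds to exactly 1.0 before the subtraction happens.
a = 1.0 + 1e-16
b = 1.0

diff = a - b

# The computed difference is exactly zero...
assert diff == 0.0
# ...yet the intended real-number difference was 1e-16. The relative
# error of the result is infinite, and nothing about the value 0.0
# itself bounds the absolute error.
```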

By the way, the IEEE standard states explicitly how the sign bit
gets set in *all* operations except the one case where it makes no
difference (at least, not to anyone that understands what floating
point is doing). The only ambiguity is in your mind.
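For the record, those IEEE 754 sign rules are observable from Python:

```python
import math

def sign(x):
    """Return +1.0 or -1.0 according to the sign bit of x."""
    return math.copysign(1.0, x)

# Multiplication: sign of the result is the XOR of the operand signs,
# even when the magnitude is zero.
assert sign(0.0 * -1.0) == -1.0

# Exact cancellation in addition/subtraction: IEEE 754 specifies +0
# under round-to-nearest.
assert sign(-0.0 + 0.0) == 1.0
assert sign(5.0 - 5.0) == 1.0

# A sum of two negative zeros keeps the sign.
assert sign(-0.0 + -0.0) == -1.0

# Even sqrt is pinned down: sqrt(-0.0) is defined to be -0.0.
assert sign(math.sqrt(-0.0)) == -1.0
```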

--
J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare
