Re: On writing negative zero - with or without sign



tholen@xxxxxxxxxxxx wrote:
James Giles writes:
....
But, the sign is present like it or not.

Only because you insist that a sign must be present. As I noted
in another posting, the situation is more complex than that.

No. The sign bit exists whether you like it or not because the
IEEE standard says so. It would still say so whether I insist
or not. There's a bit in each floating point value representing
the sign. That bit is always present. It has one of two values.
That follows from the fact that it's a bit. Nothing you've said
in any thread in any newsgroup at any time in the past or
even the future will make it more or less complex than
that. It's actually pretty simple.
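
To see the point concretely, here is a minimal sketch (the program name
and details are mine, not from the thread), assuming an IEEE-conforming
processor that provides the intrinsic IEEE_ARITHMETIC module:

    program show_sign_bit
      use, intrinsic :: ieee_arithmetic
      implicit none
      real :: pz, nz
      pz = 0.0
      nz = -pz    ! negation flips the sign bit: negative zero on IEEE systems
      print *, ieee_class(pz) == ieee_positive_zero   ! T
      print *, ieee_class(nz) == ieee_negative_zero   ! T
      print *, pz == nz     ! T: the two zeros still compare equal in value
    end program show_sign_bit

The sign bit is present in both cases; IEEE_CLASS merely reports which
of its two states it happens to be in.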

No sign means the same as a plus sign.

Doesn't have to be that way. Sure, in a binary representation, if
you allocate only a single bit to the sign, then you can have only
two states. But we're talking about the external representation,
where in theory we can have many different characters, including
minus, plus, blank, greater than, less than, and even plus-minus.

And, you keep failing to specify what magic you think the I/O
library is going to use to infer more than one bit of information
from the one bit that's present. The sign has two states.

Using a different convention for zero-magnitude values still
leaves the necessity to report which of those two states the
sign bit was in. And using said different notation just complicates
the language for no purpose. The present convention of using
a minus sign if the sign bit is set, and a plus sign (or nothing at
all) if the sign bit is clear, answers admirably. It tells the whole
story. At least all of the story that the I/O library knows without
arbitrarily making up information out of thin air!
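
That convention is easy to demonstrate. A small sketch follows (field
widths are only illustrative; whether the processor preserves and prints
the sign of zero is processor-dependent):

    program write_signed_zero
      implicit none
      real :: pz, nz
      pz = 0.0
      nz = -pz                ! negative zero on IEEE systems
      print '(F6.1)', pz      ! typically "   0.0"
      print '(F6.1)', nz      ! typically "  -0.0": the sign bit, reported as it is
    end program write_signed_zero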

That includes a correct
indication of the sign of the value being printed. This is a
very simple concept: the I/O library can't discern the meaning
of the value or whether the sign is meaningful or not.

Precisely. So in your thermometer example, where the sign isn't
meaningful, the processor doesn't know that, thus it has a fifty
percent chance of getting it wrong by being forced to display a
sign.

And it has a 100% chance of being wrong if it claims that the
answer is *exact* zero. That's not an improvement.

The existing implementation is writing all zero digits for the
value already. To anyone with an understanding of floating-point,
that means the value (the result of a subtraction), by itself, has
no significance at all, much less does the sign mean anything.

You keep claiming that the answer is being written wrong.
It isn't, you're just interpreting it wrong. If you insist on
interpreting a notational convention differently than it was
intended, you'll continue to make mistakes. As I said, you
can insist that when I say "tea" it doesn't mean a beverage:
you'll usually be wrong.
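
To underline the interpretation point: a negative zero compares equal
to zero, so reading a written "-0.0" as a nonzero negative value is a
misreading of the notation, not an error in the value written. A small
sketch, assuming a processor that distinguishes the sign of zero (the
SIGN behaviour follows the Fortran 2003 rules):

    program zero_is_zero
      implicit none
      real :: nz
      nz = -0.0               ! negative zero on an IEEE processor
      if (nz == 0.0) print *, 'a signed zero is still zero in magnitude'
      print *, sign(1.0, nz)  ! -1.0 where the processor distinguishes the zeros
    end program zero_is_zero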

The user is frequently different from the programmer. It's up
to the programmer to supply sufficient information so that the
user has what he needs.

Then by gosh the programmer can write a string of pluses and
minuses or draw a picture of a clown if (s)he wants. Just don't
lie to the later reader by pretending it is the actual report of
the actual calculation. Such an actual report can already be made
clearly by the output convention usually recommended.
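
And if a programmer genuinely wants a different convention for zero,
that is ordinary application code, not something the I/O library must
invent. A hypothetical sketch (the suppression rule is mine, purely for
illustration): report a plain "0.0" for either zero and let everything
else carry its sign:

    program custom_zero_report
      implicit none
      real :: d
      character(len=16) :: field
      d = -0.0                             ! negative zero on an IEEE processor
      if (d == 0.0) then
         write (field, '(F8.1)') abs(d)    ! ABS clears the sign bit: plain "0.0"
      else
         write (field, '(F8.1)') d
      end if
      print *, trim(adjustl(field))
    end program custom_zero_report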

What a responsible programmer will do is label the output.
Something like: "first differences of temperature readings".
If the reader of the data doesn't know what first differences are,
then such a reader won't make much sense of the data even if
you have some bizarre notation introduced for the zeros. If the
reader does know what first differences are, I doubt the sign on
any zero values will be the least bit disconcerting.

--
J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare

