Re: On writing negative zero - with or without sign

tholen@xxxxxxxxxxxx wrote:
It's the processor's misdirection, if it displays a sign on the
zero, not mine. You repeatedly ignoring the fifty percent
probability of the processor getting it wrong helps your cause
none at all.

Sigh... I've answered this too many times. I ignored nothing.
I was the first to state that the sign on the output of that *particular*
example was irrelevant. But, the sign is present like it or not.
No sign means the same as a plus sign. I don't care whether
you agree - it's the truth. There are only two possibilities:
plus zero or minus zero. There is no third choice. There's
not going to be a third choice. Get used to it. It's a fact of life.
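A sketch of that point in Python (my example, not from the original
post), since Python floats are IEEE 754 doubles: there are exactly two
zeros, they compare equal, and the sign bit is real and recoverable.

```python
import math

# IEEE 754 defines exactly two zeros: +0.0 and -0.0.
# They compare equal in value, but the sign bit is distinct.
print(-0.0 == 0.0)               # True: equal as values
print(math.copysign(1.0, 0.0))   # 1.0: sign of +0.0 is plus
print(math.copysign(1.0, -0.0))  # -1.0: sign of -0.0 is minus
```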

Nor is this example even remotely relevant to the question
of what the I/O library should do with the sign of zero. The I/O
library *must* output all it knows. That includes a correct
indication of the sign of the value being printed. This is a
very simple concept: the I/O library can't discern the meaning
of the value or whether the sign is meaningful or not. I don't
see how anyone can fail to grasp such a simple concept.
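To illustrate why the sign the library prints is genuine information
(again a Python sketch of my own, not the poster's code): the sign of
zero survives a round trip through text, so suppressing it on output
would silently discard part of the value.

```python
import math

# The I/O library prints everything it knows, including the sign bit.
s = repr(-0.0)
print(s)                      # "-0.0": the sign is part of the output
x = float(s)                  # reading the text back preserves the sign
print(math.copysign(1.0, x))  # -1.0: no information was lost in transit
```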

What rule for magically discovering whether the sign has
meaning or not do you propose? Give specifics. Since you
have repeatedly asked "what if the user doesn't know if the
sign is meaningful or not?", your magical decision making
method can't rely on user input. If the user doesn't know
when reading the output, (s)he can't know at compile time
either. Unless you're planning to *always* suppress the sign
on zero? Ok, but no one will think your method is in the least
useful. You'll be disregarded entirely (and correctly so).

J. Giles

"I conclude that there are two ways of constructing a software
design: One way is to make it so simple that there are obviously
no deficiencies and the other way is to make it so complicated
that there are no obvious deficiencies." -- C. A. R. Hoare