Re: OT: Re: Perl Peeves

On 2009-02-05 00:24, Bruce Cook <bruce-usenet@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Peter J. Holzer wrote:

On 2009-02-03 14:59, Bruce Cook
<bruce-usenet@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
Uri Guttman wrote:
forcing a specific value to be the one 'true' false (sic :) is a
waste of time and anal. it is like normalizing for no reason and it
bloats the code. even the concept of using the result of a boolean
test for its numeric or string value bothers me. a boolean test
value (of any kind in any lang) should only be used in a boolean
context. anything else is a slippery shortcut that makes the code
more complex and harder to read.

That's basically where I'm coming from - I have an immediate cringe when
I see the result of a test being used as an int.

I find this odd from someone who claims to have an extensive background
in assembler and early C programming. After all, in machine code
everything is just bits. And early C had inherited quite a bit (no pun
intended) from that mindset, if mostly via its typeless predecessors
BCPL and B.

It's basically a background thing. As you say, everything is just bits. The
earlier compilers I worked with were all 16-bit, and literally everything was
a 16-bit int: pointers, *everything*.

floats and longs weren't, I hope.

(even when chars were passed into functions, they were passed as 16-bit
values to satisfy even-byte boundary constraints),

char (or more precisely any integral type smaller than int) is promoted
to (unsigned) int when passed to a function without a prototype. This is
still the case in C.

manipulated as 16-bit; you just ignored the top 8 bits of the word in
certain operations.

In arithmetic expressions, char (or more precisely any integral type
smaller than int) is promoted to (unsigned) int. This is still the case
in C.

To add to this, the compilers didn't do a lot of sanity checking; the
compiler just assumed you knew what you were doing and would
faithfully "just do it".

That's what lint was for, as you note below. If you have only 64 kB
address space, you want to keep your programs small.

Early compilers didn't have function prototyping (a function prototype
was a syntax error),

Prototypes were originally a feature of C++ and were introduced into C
with the C89 standard. I think I've used compilers which supported them
before that (gcc, Turbo C, MSC, ...) but it's too long ago for me to be sure.

void was a keyword introduced to the language later, so void * was
unheard of in most code.

About the same time as prototypes, although I don't think I've ever used
a C compiler which didn't support it, while I've used several which
didn't support prototypes.

This engendered very fast and loose usage of ints for everything. In a lot
of early code you'd see pointers, chars and all sorts declared as int, and
some truly horrendous coding. The code could have been done properly using
unions, but that was work, and since everyone knew what was really
happening in the background, why bother?

This all came crashing down when we started porting to other platforms,
which had different architecture driven rules.

As they say, "port early, port often". Thankfully I was exposed to the
VAX and 68k-based systems shortly after starting to program in C, so I
never got into the "I know what the compiler is doing" mindset and
rather quickly got into the "if it isn't documented/specified you cannot
rely on it" mindset.

It became quite common for a project to have a project-wide header file
which defined the project's base data types, and one of the common ones that
turned up was:

typedef int bool;

This didn't mean bool was special, declaring it just signaled to the
programmers that they were dealing with an int that had certain meaning.

That's a good thing if the "certain meaning" is documented and strictly
adhered to.

In systems programming you would get things like this simplistic example:

bool is_rx_buffer_full (int buffer) {
    return (qio_buffer_set[buffer]->qio_status & QIO_RX_FLAGS);
}

So a "bool" isn't a two-valued type - it can take many values. This is
not what I expect from a boolean type.

note that this function is declared as returning bool, which implies that
what it returns should only be used in a conditional expression. If you
tried to use it as an int, you could, but you wouldn't get what you expected.

Actually I would get what I expect if I treat your "bool" as an int, but
not what I expect when I treat your "bool" as what I expect from a
boolean type.

Expectations differ.

So documentation is very important and this is (to get back to Perl) why
I criticised that the "true" return value of several operators in Perl
is not documented.

The whole industry hit the portability issue at about the same time. This
led to a lot of the modern features of C, including POSIX, function
prototypes, a lot of the standard header files, many of the standard
compiler warnings and of course the C standards. Others decided that C was
just stupid for portability and created their own language (DEC used BLISS,
which was an extremely strongly typed language and served them well across
many very different architectures).

Actually, BLISS is older than C, so it can't have been developed because
people were disappointed by C. Also according to wikipedia, BLISS was
typeless, not "extremely strongly typed".

I find it odd that
normalization of bool results is built into the compiler,

What "normalization of bool results is built into the compiler"?

c = (a || b)

as you say, these are just ints like everything else in C.
Easiest way to compile that on most architectures would be:

mov a,c
bis b,c ; bis being the '11 OR operator

Not generally, because

* || is defined to be short-circuiting, so it MUST NOT evaluate
b unless a is false.
* a and b need not be integer types.

And of course the result of the operation is defined as being 0 or 1.

I don't see this as "normalisation", because there is no intermediate
step which does a bit-wise or.

c = (a || b)

is semantically exactly equivalent to

c = (a != 0 ? 1
: (b != 0 ? 1
: 0))

It is not equivalent to

c = ((a | b) != 0 ? 1 : 0)

(Of course, in some situations an optimizing compiler may determine that
it is equivalent in this specific situation and produce the same code.)

I find it very useful that operators built into the language returned a
defined result. If anything, C has too many "implementation defined" and
"undefined" corners for my taste.

Yes, but I think it's also one of the strengths of C. You define your own
rules to make it fit your needs for a particular project, and as long as
you're consistent and design those rules properly it all works.

Modern languages try to address these undefined corners, but it often makes
them difficult to use for some applications.

I strongly disagree with this. The various implementation defined and
undefined features of C don't make it simpler for the application
programmer - on the contrary, they make it harder, sometimes a lot
harder. What they do simplify (and in some cases even make possible) is
to write an efficient compiler for very different platforms.