Re: Computer time -> Developer time -> User time?
From: Randall Hyde (randyhyde_at_earthlink.net)
Date: Sat, 22 Jan 2005 17:55:39 GMT
"Frank Kotler" <email@example.com> wrote in message
> I don't doubt that this is true, but I don't think it tells
> the whole story. I don't think that the reasons Beth gives
> provide a complete explanation, either. Sure, "portability"
> is going to increase the code size, but to the extent that
> it provides "functionality", and isn't just a "mantra",
> that isn't what I'd call "bloat". "Black box libraries"
> *can* be bloated, but not necessarily so. There *is* a
> difference between libraries, and a "concise" library would
> be just as effective in reducing developer time as a
> "bloated" one.
A large percentage of bloat, in modern applications, is due
more to "cut & paste" programming than to the use of
generic library functions :-)
> It seems to me that to account for the size of "modern
> apps", I have to assume that needed/desired features are
> being implemented in a "bloated" fashion, and thoughtlessly
> pasted onto already "bloated" code.
Yeah, like cutting some large piece of code elsewhere in
your program, changing a few lines, and pasting the result
back into the program. You'd be surprised how often this
occurs in bloated programs. It's one of the main reasons
good library support is *essential* for efficient program
development -- to avoid bloating your programs.
Yep, libraries can be misused. And once in a while, the
"cut & paste" specific approach is best. But, by and large,
people who program using "cut & paste" methodologies
(the "specific model") write bloated code.
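A hypothetical sketch in C of what I mean (the names are invented
for illustration): two diverged copies of the same loop -- the
"specific model" -- versus one small generic routine that a library
would provide. Scale the first pattern up across a few million lines
and you have the bloat in question.

```c
/* "Cut & paste" style: someone copied the first loop, changed one
   character, and pasted it back. Two routines to maintain, and any
   bug fix must now be applied twice. */
static int count_spaces(const char *s) {
    int n = 0;
    for (; *s; s++)
        if (*s == ' ') n++;
    return n;
}

static int count_tabs(const char *s) {
    int n = 0;
    for (; *s; s++)
        if (*s == '\t') n++;
    return n;
}

/* Library style: one generic routine covers both cases -- and every
   other character -- with no duplicated logic. */
static int count_char(const char *s, char c) {
    int n = 0;
    for (; *s; s++)
        if (*s == c) n++;
    return n;
}
```

The generic version isn't slower in any way that matters here; it's
simply the duplication factored out before it can spread.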
> This *may* be
> intentional - the "average software consumer" may think that
> a program that takes 3 CDs to install is a "better value"
> than one that fits on a floppy, even if the functionality
> and "feature set" is identical. I recall reading a post -
> long ago - from the author of a "password recovery" program,
> intended (nominally) to recover encrypted files to which the
> owner had forgotten the password (for some specific word
> processor, IIRC). He stated that his algorithm was so fast
> that he'd added a delay, so that his customers would think
> the program was "doing something".
> > Money talks.
> Yes, and what the "money spenders" like (apparently) might
> not match what someone who actually *knows* something about
> software would like. Money talks in another way, too:
> advertising. That has to be a factor in what people buy...
> or use. If Windows and Linux were compared on the basis of
> "users per advertising dollar spent", I'll bet Linux would
> look pretty good!
Actually, Linux is starting to look a whole lot like Windows.
in terms of the money spent on it, its acceptance, and its
software delivery times. As the Who once sang, "meet the new boss,
same as the old boss." When (if) Linux finally displaces Windows
as *the* OS, all the elitists supporting Linux today will simply
move on to something else. It's just in their nature to want to
complain about the status quo.
> > "Sure Word is big and bloated. But that's what people
> > want. They won't pay for apps without a lot of features..."
> I don't know how big Word actually is, or exactly what
> features it has (I understand that, unless told otherwise,
> it'll execute any macro it encounters, possibly executing
> malicious code - not a feature I'd put on my "must have"
> list, but that's a different argument...) So perhaps Word
> isn't a good one for me to use as an example... but I'll
> take the chance. I don't think anyone is going to write a
> "feature for feature clone" of Word in "carefully
> hand-crafted assembly".
Word is a *large* software system. It has required
100s of programmers to develop -- developers with
average skills (think bell curve when we're talking
about these kinds of numbers). Getting that many
*great* programmers who know and are willing to
use assembly language to write several million lines of
code and get it all working together would be a bit
of a challenge, to say the least. Assembly programmers
are also notoriously independent. This is the major
failing in any attempt to use the Open Source Development
Model to claim that 100s of assembly programmers could
work together to create a large app like Word. It's hard
enough getting *two* independently-minded programmers
to work together. 100s of them? They'd spend all their
time arguing about how macros ought to be implemented
in LuxAsm, er, Word :-)
> Wouldn't have to be all that
> "hand-crafted" - such a thing might be done using the
> high-level features of HLA, for example - not the smallest
> possible code, but not that bad (it's "not so humble" of me
> to have *any* opinion of Randy's code, perhaps... but I do)
Again, IMO the "bloat" in modern programs is due to poor
design and implementation (e.g., "cut & paste coding"), not to
the language being used. One can write a short and efficient
program in Perl. One can write a bloated program in assembly.
It's the developer, not the language, that determines the result.
> Even using C - in the manner that Herbert suggests: *learn*
> assembly so you understand what you're asking the processor
> to do, *know* your compiler well enough to understand what
> code it will emit (Herbert sometimes neglects to mention
> this part), and then write "thoughtful" C based on that
> knowledge - would do the trick.
This is precisely the argument I make in my new
book series "Write Great Code". It doesn't matter what
language you use -- if you don't understand the low-level
operation of the machine, you're not in a good position to
write great code. Learning low-level coding is important to
anyone who wants to write great, efficient code.
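A classic C illustration of the point (my example, not one from the
book): the two routines below are equivalent at the source level, but
a programmer who knows what the compiler emits will spot that the
first one re-scans the string on every trip through the loop.

```c
#include <ctype.h>
#include <string.h>

/* Naive: strlen() sits in the loop condition. Because the loop body
   writes through s, the compiler generally cannot prove the string's
   length is unchanged, so it re-evaluates strlen() each iteration --
   the emitted code walks the string O(n) times, O(n^2) overall. */
void upcase_slow(char *s) {
    for (size_t i = 0; i < strlen(s); i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}

/* "Thoughtful" version: compute the length once, so the generated
   loop makes a single pass over the string. */
void upcase_fast(char *s) {
    size_t len = strlen(s);
    for (size_t i = 0; i < len; i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}
```

Nothing in the C language itself tells you the first version is
quadratic; only an understanding of what the compiler can and cannot
do with that loop condition does.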
> "Just suppose" that someone wrote a "feature for feature
> clone" of Word - not necessarily squeezing out every byte,
> as the "smallest code" contestants do, but with *some*
> thought to writing "reasonably concise" code - *some*
> thought to avoiding obvious bloat... Any guess - just a
> rough estimate - of how the size would compare? I'd bet on
> "half" - possibly much less - without even knowing how big
> Word is now. Do you think I'm wrong?
Of course you're correct.
Anytime you start over on your code base and get to
learn from the mistakes of the past, you'll end up with
a better product. It was Fred Brooks who
said, "Plan to throw one away; you will, anyhow."
This is why, for example, my plans for HLA always
centered around creating a prototype first and a
real version as HLA v2.0. The v1.x release allowed
me to develop the language and make the mistakes that
could be corrected the second time around. The big
problem with large projects is that it isn't cost effective
to rewrite them from scratch. As a result, they keep building
on old code (and poor architecture). Hence, bloat.