Re: data compression program

In article <1146097479.386877.191860@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>,
"Einstein" <michaelhh@xxxxxxxxx> wrote:

Gee Barb, you sure live up to your name.

Let's see: Saint Barbara (for whom Santa Barbara is named) is the patron
saint of Architects, Builders, Dying, Fireworks, Lightning, Miners,
Prisoners, and Storms (according to
glossary.htm>). Maybe this relates to you being a Prisoner of your
provably crank idea.

In logic, "Barbara" is the mnemonic for syllogisms of the form:
All A are B, and all B are C, implies all A are C.
Maybe this relates to your disdain for simple logic.

> Shannon was wrong.

Because you say so?

> Oh, there IS a finite size requirement... but random
> binary data is actually something you can create order out of. Provided
> the length is long enough.

That's just silly. Restoring part of what you snipped:

> You can only losslessly compress out the REDUNDANCY in data,
> and truly random data has no redundancy. Oops.
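Since you keep missing it, here is the counting argument spelled out in a few lines of Python (my sketch, not something from either of our posts):

```python
# Pigeonhole sketch: there are 2**n distinct n-bit strings, but only
# 2**n - 2 nonempty binary strings strictly shorter than n bits.
# So no lossless compressor can shorten EVERY n-bit input; at least
# two inputs must map to outputs of length n or more.
n = 8
inputs = 2 ** n                                      # 256 distinct 8-bit strings
shorter_outputs = sum(2 ** k for k in range(1, n))   # strings of length 1..7
print(inputs, shorter_outputs)                       # 256 vs 254
assert shorter_outputs < inputs                      # not enough room for all
```

That shortfall only grows with n, so "provided the length is long enough" makes the problem worse, not better.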

> It's not about pure math. Math is not everything...

No it's not, but it does have the huge advantage of actually making sense.

> math does not cover
> a Huffman appropriately for instance.

> 11 = 1
> 10 = 01
> 01 = 001
> 00 = 000

> It's not typical of math to have this sort of results. Therefore
> classic math FAILS.

Now pay attention, since this is YOUR example. Assume the 4 words 11,
10, 01, 00 occur equally often (as they will in a random bit stream).
Then the uncompressed stream takes 2 bits per word. Using your encoding
above (which, BTW, would be a Huffman coding ONLY IF the words did NOT
have the same frequencies, i.e. were not from a random bit stream),
on average a word takes 2.25 bits (= (1+2+3+3)/4). THAT is what
"classic math" says; what does your cranky math say?
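In case the arithmetic is too much, here it is in Python (a sketch using your own code table, with each word at probability 1/4 as in a random bit stream):

```python
import math

# Your proposed code: 2-bit words mapped to variable-length codewords.
code = {"11": "1", "10": "01", "01": "001", "00": "000"}

p = 0.25  # each 2-bit word equally likely in a random bit stream
avg_bits = sum(p * len(cw) for cw in code.values())
print(avg_bits)  # 2.25 bits per 2-bit word: an EXPANSION, not a compression

# Shannon's bound for this source: H = -sum p*log2(p) = 2.0 bits per word,
# so no lossless code can average below 2 bits here.
entropy = -sum(p * math.log2(p) for _ in code)
print(entropy)  # 2.0
```

2.25 > 2.0, exactly as "classic math" predicts.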

> 80% completed on software. Final swap tables are made.

"Ninety-eight percent complete / But it's been that way for weeks"