Compression: Some New Idea
-
krishnadevank wrote: Can you explain it?
Didn't you already explain it here?
"The pointy end goes in the other man." - Antonio Banderas (Zorro, 1998)
-
Your basic idea is good. You take advantage of the fact that text files contain bytes from a relatively small set (26 lowercase and 26 uppercase letters), so the differences would be small. Yes, the result should be smaller than the source. But read the specification of the LZW algorithm: it is a classic compression approach for exactly this case, where the source file contains bytes from a small set. It takes your idea even deeper.
Robert-Antonio
"I launched Norton Commander and saw drive C: on the left, drive C: on the right... Damn, why do I need two drives C:??? So I formatted one..."
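To illustrate, here is a minimal LZW compressor sketch in Python. It is only a sketch of the dictionary-building idea: the function name and the test string are mine, not from any post in this thread, and a real implementation would also pack the emitted codes with a growing bit width.

    def lzw_compress(data):
        # Start with one dictionary entry per possible byte value.
        table = {bytes([i]): i for i in range(256)}
        next_code = 256
        out = []
        current = b""
        for byte in data:
            candidate = current + bytes([byte])
            if candidate in table:
                # Keep extending the current match.
                current = candidate
            else:
                # Emit the code for the longest known match, then
                # learn the new sequence that is one byte longer.
                out.append(table[current])
                table[candidate] = next_code
                next_code += 1
                current = bytes([byte])
        if current:
            out.append(table[current])
        return out

    print(lzw_compress(b"abababababababab"))
    # [97, 98, 256, 258, 257, 260, 259] -- 7 codes for 16 input bytes

The longer the input, the longer the learned dictionary entries become, which is why repetitive text made of real words compresses so well.
-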
But text files do not contain only 26 or 52 letters; they contain whitespace and other characters as well. And the differences can be no cheaper to encode than the symbols themselves: it takes 6 bits to represent 52 letters plus some formatting characters, and it will still take 6 bits to represent the differences, otherwise you would not be able to encode any "Za" pairs. And LZW would certainly be better, since text files like this compress very well when the data is real words and not random. What he has designed is a very poor encryption scheme with no compression at all, just bit packing.
John
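To make this concrete, here is a quick Python check (my own sketch; the exact 6-bit alphabet is an assumption, not something specified in the thread). It maps the 52 letters plus a few formatting characters to 6-bit indices and measures how many bits the symbol-to-symbol differences still need.

    import string

    # Assumed 6-bit alphabet: 52 letters plus a few formatting characters.
    alphabet = string.ascii_letters + " .,\n"   # 56 symbols, fits in 6 bits
    index = {ch: i for i, ch in enumerate(alphabet)}

    def delta_bits(text):
        # Replace each symbol by its alphabet index, then take the
        # successive differences modulo 64 (the 6-bit alphabet size).
        ids = [index[ch] for ch in text]
        deltas = [(b - a) % 64 for a, b in zip(ids, ids[1:])]
        return max(deltas).bit_length()

    print(delta_bits("Hello World, this is a Test.\n"))  # prints 6

The jumps between lowercase, uppercase, and punctuation spread the differences across almost the whole 6-bit range, so the delta stream needs the same 6 bits per symbol as the raw stream: no compression, just bit packing.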