DeflateStream & GZipStream

Managed C++/CLI · question, help · 6 posts · 2 posters

#1 · ant damage

I've been using DeflateStream and GZipStream for a while, and I recently noticed that both streams produce an excessive amount of data when compressing. My question: is there a bug in the compression algorithm?
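For reference, a minimal sketch of the kind of round trip being described, with the file name as a placeholder (C++/CLI, compile with /clr):

    // Minimal sketch: deflate a whole file in one Write and report the sizes.
    // "input.bin" is a placeholder name, not from the original post.
    using namespace System;
    using namespace System::IO;
    using namespace System::IO::Compression;

    int main()
    {
        array<Byte>^ data = File::ReadAllBytes("input.bin");

        MemoryStream^ output = gcnew MemoryStream();
        // leaveOpen = true so output->Length is still readable after Close()
        DeflateStream^ deflate =
            gcnew DeflateStream(output, CompressionMode::Compress, true);
        deflate->Write(data, 0, data->Length);
        deflate->Close(); // flushes and terminates the final deflate block

        Console::WriteLine("Original:   {0} bytes", data->Length);
        Console::WriteLine("Compressed: {0} bytes", output->Length);
        return 0;
    }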

#2 · Lost User

ant-damage wrote:

produce an excessive amount of data when compressing

What are you compressing? The LZ77 algorithm can actually create MORE data, depending on what you're compressing.

ant-damage wrote:

Is there a bug in the compression algorithm?

No. Best Wishes, -David Delaune
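To see that expansion first-hand, here is a small sketch that deflates 64 KB of zeros next to 64 KB of random (incompressible) bytes; the random buffer typically comes out a little larger than it went in:

    // Sketch: deflate on repetitive vs. random (incompressible) data.
    using namespace System;
    using namespace System::IO;
    using namespace System::IO::Compression;

    static long long DeflatedSize(array<Byte>^ input)
    {
        MemoryStream^ ms = gcnew MemoryStream();
        DeflateStream^ ds = gcnew DeflateStream(ms, CompressionMode::Compress, true);
        ds->Write(input, 0, input->Length);
        ds->Close();
        return ms->Length;
    }

    int main()
    {
        array<Byte>^ zeros  = gcnew array<Byte>(64 * 1024); // all 0x00
        array<Byte>^ random = gcnew array<Byte>(64 * 1024);
        (gcnew Random())->NextBytes(random);                // incompressible noise

        Console::WriteLine("zeros:  {0} -> {1} bytes", zeros->Length,  DeflatedSize(zeros));
        Console::WriteLine("random: {0} -> {1} bytes", random->Length, DeflatedSize(random));
        return 0;
    }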

#3 · ant damage

I'm compressing a binary file. It is a 678 KB file, and after compressing all of its bytes with DeflateStream I got 1 MB of data. I used the zlib compression library on the same file and got a smaller file of 671 KB. I searched around the web and found claims that the DeflateStream algorithm has a bug and generates too much data compared with the original file.

#4 · Lost User

ant-damage wrote:

after compressing all of its bytes with DeflateStream I got 1 MB of data.

Did you mean compressing with GZipStream? Because... DeflateStream does not do compression.

ant-damage wrote:

I searched around the web and found claims that the DeflateStream algorithm has a bug and generates too much data compared with the original file.

The GZipStream class does not check for incompressible data. Yes, it is possible to create more data; that is the nature of the algorithm. Whether to classify it as a bug is debatable, but I agree that the class should be smart enough to detect when it cannot compress the data and handle that case. Best Wishes, -David Delaune
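One common caller-side workaround is to compress into a buffer first and keep the original bytes whenever compression makes them bigger. A sketch of that pattern follows; the one-byte flag is an invented convention for this example, not anything the framework provides:

    // Sketch: "store raw if incompressible" fallback around DeflateStream.
    using namespace System;
    using namespace System::IO;
    using namespace System::IO::Compression;

    array<Byte>^ PackMaybeCompressed(array<Byte>^ input)
    {
        MemoryStream^ ms = gcnew MemoryStream();
        DeflateStream^ ds = gcnew DeflateStream(ms, CompressionMode::Compress, true);
        ds->Write(input, 0, input->Length);
        ds->Close();

        MemoryStream^ packed = gcnew MemoryStream();
        if (ms->Length < input->LongLength)
        {
            packed->WriteByte(1);            // flag: payload is deflated
            array<Byte>^ c = ms->ToArray();
            packed->Write(c, 0, c->Length);
        }
        else
        {
            packed->WriteByte(0);            // flag: payload is stored raw
            packed->Write(input, 0, input->Length);
        }
        return packed->ToArray();
    }

Unpacking would read the flag byte and either inflate the remainder or copy it verbatim.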

#5 · Lost User

Randor wrote:

DeflateStream does not do compression

Yes it does, if we are to believe the documentation[^]. Compared to GZipStream, it merely leaves out the gzip header, the CRC, and things like that.
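A rough way to check the difference in framing: run the same buffer through both streams and compare lengths; the gzip output should be the deflate output plus roughly the 10-byte gzip header and the 8-byte CRC/length trailer:

    // Sketch: GZipStream output is a deflate stream wrapped in gzip framing.
    using namespace System;
    using namespace System::IO;
    using namespace System::IO::Compression;

    int main()
    {
        array<Byte>^ data = gcnew array<Byte>(64 * 1024); // 64 KB of zeros

        MemoryStream^ d = gcnew MemoryStream();
        DeflateStream^ ds = gcnew DeflateStream(d, CompressionMode::Compress, true);
        ds->Write(data, 0, data->Length);
        ds->Close();

        MemoryStream^ g = gcnew MemoryStream();
        GZipStream^ gs = gcnew GZipStream(g, CompressionMode::Compress, true);
        gs->Write(data, 0, data->Length);
        gs->Close();

        Console::WriteLine("deflate: {0} bytes", d->Length);
        Console::WriteLine("gzip:    {0} bytes", g->Length); // ~18 bytes more
        return 0;
    }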

#6 · Lost User

I have found that the results are better when you use the biggest possible Write (so buffer everything first, then write it all out at once). It's slower, of course, but for some reason this implementation gives really poor results on small writes. Such a big expansion still seems excessive, though. It can only happen due to a poor implementation, since the deflate spec contains an escape mechanism ("stored" blocks) for keeping incompressible data uncompressed with only a tiny overhead; but then, this implementation is not known for being good in any way.
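A sketch of that comparison, with the chunk size and file name as arbitrary placeholders; it compresses the same data once with many tiny Writes and once with a single big one:

    // Sketch: many small Writes vs. one buffered Write through DeflateStream.
    using namespace System;
    using namespace System::IO;
    using namespace System::IO::Compression;

    static long long CompressChunked(array<Byte>^ data, int chunk)
    {
        MemoryStream^ ms = gcnew MemoryStream();
        DeflateStream^ ds = gcnew DeflateStream(ms, CompressionMode::Compress, true);
        for (int i = 0; i < data->Length; i += chunk)
            ds->Write(data, i, Math::Min(chunk, data->Length - i));
        ds->Close();
        return ms->Length;
    }

    int main()
    {
        array<Byte>^ data = File::ReadAllBytes("input.bin"); // placeholder name

        Console::WriteLine("64-byte writes: {0} bytes", CompressChunked(data, 64));
        Console::WriteLine("single write:   {0} bytes", CompressChunked(data, data->Length));
        return 0;
    }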
