How to store huge binary files without Database

Tags: database, tutorial, docker, question
Mercurius84 (#1)

Storing huge files without a database. Hi, I'm at my wits' end and just need some ideas/brainstorming from the experts here. Scenario: I have 30 text files, each about 300-500 MB. What I need to do is convert these files to some binary form and store them somewhere, but not in a SQL database. My intention is to store the files in container-like structures. For example, I have container A and container B, each with a size cap of 1 GB; files are written into container A until its quota is reached, then they go into container B, and so on to C, D, E... On top of that, I will have an application to locate these files again later. Is there any medium/container I can use for this purpose? Thanks

Pete OHanlon (#2, replying to #1)

It wouldn't be hard for you to write one. I can't think of anything that fulfills this particular feature set out of the box, but what you've asked for isn't that complicated. Effectively, you'd just create a set of arrays and fill them. Obviously you couldn't hold all of these arrays in memory at once, but it's easy enough to fill one, write it out, and discard it before moving on to the next. A couple of thoughts: because we don't know what platform you'll be running this on, we can't get much more specific. If, however, it will run on Vista or a later version of Windows, take a look at the Kernel Transaction Manager; it will help you protect the integrity of the containers as you write them out, because you can wrap the file writes in transactions. Oh, and whatever you do, make sure the structures you save the files into get backed up regularly.

      Chill _Maxxx_
      CodeStash - Online Snippet Management | My blog | MoXAML PowerToys | Mole 2010 - debugging made easier
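
For illustration, here is a minimal sketch of the fill-and-roll idea in Java (one of the two platforms under consideration). The 1 GB cap, the container file names, and the index format are assumptions taken from the question, not a definitive implementation:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Rolls input files into fixed-size containers (container-0.bin,
// container-1.bin, ...) and reports where each file landed so a
// lookup application can find it again later.
public class ContainerWriter {
    static final long CAP = 1L << 30;   // 1 GB quota per container (from the question)

    private final Path dir;
    private int index = 0;    // current container number
    private long written = 0; // bytes already in the current container

    ContainerWriter(Path dir) { this.dir = dir; }

    // Appends one file; returns "container:offset:length" for the index.
    String add(Path file) throws IOException {
        long size = Files.size(file);
        if (written > 0 && written + size > CAP) { // quota reached: roll over
            index++;
            written = 0;
        }
        Path container = dir.resolve("container-" + index + ".bin");
        long offset = written;
        try (OutputStream out = new FileOutputStream(container.toFile(), true)) {
            Files.copy(file, out); // raw append; compress here if required
        }
        written += size;
        return index + ":" + offset + ":" + size;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("containers");
        Files.createDirectories(dir);
        ContainerWriter w = new ContainerWriter(dir);
        for (String name : args) {
            System.out.println(name + " -> " + w.add(Paths.get(name)));
        }
    }
}
```

A real version would persist the name-to-location index (a plain text file is enough for 30 entries) and, per Pete's point, could wrap each append in a transaction on platforms that support it.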

Mercurius84 (#3, replying to #2)

Hi, thanks for the reply. The platform is undecided: either .NET or Java, depending on the complexity and ease of the job. I'll read up on the Kernel Transaction Manager. Does KTM work well with 600 GB up to terabytes of files, though? Are there any issues or constraints it might have? Any other recommendations? I'm afraid my management may decide to host the application on UNIX or another platform rather than Windows, and then I'd be in trouble, having to revamp the core program.

jschell (#4, replying to #1)

Mercurius84 wrote:

I have 30 text files, each about 300-500 MB.
What I need to do is convert these files to some binary form and store them somewhere.

Err... the file system already stores binary files. The file system already has a hierarchy. The file system is not a database. And any solution, including a database, ultimately uses the file system for storage. So what exactly is the problem?


Mercurius84 (#5, replying to #4)

Hi, I just want to compress the files and package them into a single container of a configurable size. Any ideas on how to do the packaging? (Not zipping.)


Lost User (#6, replying to #5)

Mercurius84 wrote:

I just want to compress the files and package them into a single container of a configurable size.

Mercurius84 wrote:

Any ideas on how to do the packaging? (Not zipping.)

How is "compressing and packaging" not zipping?

              Use the best guess


Mercurius84 (#7, replying to #6)

I have found a solution in this product: the Hadoop Distributed File System (HDFS™), a distributed file system that provides high-throughput access to application data. Thanks :)


jschell (#8, replying to #7)

Mercurius84 wrote:

Hadoop Distributed File System (HDFS™): a distributed file system that provides high-throughput access to application data.

Based on what you have described as your needs, this is overkill.


Keld Olykke (#9, replying to #5)

Hi Merc, it just hit me when reading this post that your problem is similar to the problem of splitting a large file into chunks, for example when you have a big archive on disk and want to store it on removable media (floppy, CD-ROM, or DVD). In the old days we used ARJ to split a compressed file into a number of volumes (.arj, .a01, .a02, etc.). Each volume had a fixed maximum size, e.g. 1.44 MB for a floppy. A limitation was that you could not add or remove content in .a02 without breaking the big file, so if you need to do that, splitting big files into compressed volumes might not be your solution. If you want to play with this approach you can use RAR; according to http://acritum.com/software/manuals/winrar/html/helparcvolumes.htm[^] this feature is called multivolume archives. Kind Regards, Keld Ølykke
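
For reference, a multivolume archive like this can be created from the command line; assuming the rar tool is available, something along the lines of `rar a -v1g containers.rar *.txt` packs the files into compressed volumes capped at 1 GB each (the -v switch sets the volume size).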


jschell (#10, replying to #5)

Mercurius84 wrote:

I just want to compress the files and package them into a single container of a configurable size.

Windows has compressed drives, and I'm pretty sure every other major OS has an equivalent. But your requirements still don't demonstrate a need even for that. Again, your requirements don't call for a specialized system; current file systems are more than capable of handling that trivial amount of data.
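
As a concrete example of the OS-level compression mentioned here: on Windows, NTFS folder compression is transparent to applications and can be turned on from the command line with `compact /c /s` run in the target directory; other platforms have similar facilities.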


Mercurius84 (#11, replying to #8)

Hi, what do you mean by overkill? Does HDFS have limitations, or does it fail to deliver on locating the processing logic near the data? I have no personal experience with this product. As summarized: Hadoop is an Apache Software Foundation distributed file system and data management project with the goal of storing and managing large amounts of data. Hadoop uses a storage system called HDFS to connect commodity personal computers, known as nodes, contained within clusters over which data blocks are distributed; you can access and store the data blocks as one seamless file system using the MapReduce processing model. HDFS shares many features with other distributed file systems while differing in some important ways. One significant difference is HDFS's write-once-read-many model, which relaxes concurrency-control requirements, simplifies data coherency, and enables high-throughput access. To provide an optimized data-access model, HDFS is designed to locate processing logic near the data rather than locating data near the application space. It sounds promising.


Mercurius84 (#12, replying to #9)

Many thanks. But this process only works once the set of documents is final: I don't think it would be possible to append more files after they have been compressed and segmented. I also have a requirement that additional files can be added after the initial compression and segmentation.


Mercurius84 (#13, replying to #10)

Yes. However, we need to shrink the file size down further and have better file handling than the OS provides.


Keld Olykke (#14, replying to #12)

Normally I don't think further modification of a multivolume archive is possible, but I don't have the insight to say for sure; there might be an archive tool that can do it. Kind Regards, Keld Ølykke


jschell (#15, replying to #13)

Mercurius84 wrote:

However, we need to shrink the file size down further and have better file handling.

And you base that on what, exactly? What are your criteria? What is the desired improvement? How did you measure those criteria against the plain file system? And how do you escape the fact that any such solution will STILL rely on the file system?

Mercurius84 wrote:

...better file handling than the OS provides.

The OS has been optimized to handle files, given that the file system is a key component of any desktop OS.


jschell (#16, replying to #11)

Mercurius84 wrote:

What do you mean by overkill? ... Hadoop is an Apache Software Foundation distributed file system and data management project with the goal of storing and managing large amounts of data.

Your stated requirements do not meet the definition of "large amounts of data". Let me give you some examples of large data:

- 2000 transactions a second sustained, with an expected lifetime of 7 years and a real-time need of 6 to 18 months of immediately available data. Each transaction is 1 KB.
- Each originator produces several 100 MB downloads several times a month. Sizing must allow for up to 10,000 originators over a lifetime of 5 years.
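
To put numbers on the first example: 2000 transactions/s at 1 KB each is roughly 2 MB/s, about 170 GB/day, or over 60 TB/year; across 7 years that is on the order of 400 TB. By contrast, the thirty 300-500 MB files in the original question total about 9-15 GB.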


Mercurius84 (#17, replying to #16)

                                    I get what you mean : )


Mercurius84 (#18, replying to #15)

I had assumed that programming could do 'almost' wonderful things. For example: I have a file of 10 MB; this special program shrinks it by ~30% (to 7 MB, say) and then stores it in a container. What I mean by avoiding OS handling is something unlike normal compression that just stores the files into a folder.
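
Whether a ~30% shrink is realistic depends entirely on the content, so it is worth measuring on the actual files before designing around it. A minimal Java sketch using the standard java.util.zip.GZIPOutputStream (the class name RatioCheck is hypothetical):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.GZIPOutputStream;

// Compresses one file with GZIP and prints the achieved ratio,
// so the ~30% shrink assumption can be checked against real data.
public class RatioCheck {
    public static void main(String[] args) throws IOException {
        Path in = Paths.get(args[0]);
        Path out = Paths.get(args[0] + ".gz");
        try (GZIPOutputStream gz = new GZIPOutputStream(Files.newOutputStream(out))) {
            Files.copy(in, gz);   // stream the file through the compressor
        }
        long before = Files.size(in), after = Files.size(out);
        System.out.printf("%d -> %d bytes (%.0f%% of original)%n",
                before, after, 100.0 * after / before);
    }
}
```

Plain text of the kind described (300-500 MB log-like files) often compresses far better than 30%, which is another reason to measure first.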


Lost User (#19, replying to #7)

                                        Why not use http://msdn.microsoft.com/en-us/library/system.io.compression.ziparchive.aspx[^]?

                                        Use the best guess
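
Since the platform is still undecided, the Java analogue of the ZipArchive suggestion is java.util.zip. A minimal sketch (the class name ZipPacker is hypothetical; the 1 GB cap is the OP's figure) that packs files into numbered ZIP containers, rolling to a new archive once the current one passes the cap; the check happens between entries, because an entry's compressed size isn't known until it has been written:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Packs files into part-0.zip, part-1.zip, ... rolling to a new
// archive once the current one exceeds the size cap.
public class ZipPacker {
    static final long CAP = 1L << 30;   // 1 GB per container

    public static void main(String[] args) throws IOException {
        int part = 0;
        ZipOutputStream zip = open(part);
        for (String name : args) {
            if (Files.size(Paths.get("part-" + part + ".zip")) > CAP) {
                zip.close();
                zip = open(++part);     // quota reached: roll to the next container
            }
            zip.putNextEntry(new ZipEntry(name));
            Files.copy(Paths.get(name), zip);   // compress this file into the archive
            zip.closeEntry();
        }
        zip.close();
    }

    static ZipOutputStream open(int part) throws IOException {
        return new ZipOutputStream(
                Files.newOutputStream(Paths.get("part-" + part + ".zip")));
    }
}
```

This also addresses the OP's later requirement of appending more files: unlike a split multivolume archive, adding a file here only ever touches the newest container.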


jschell (#20, replying to #18)

Mercurius84 wrote:

This special program shrinks it by ~30% (to 7 MB, say)
and then stores it in a container.

Fine, but why do you need to do that? What is the business or technical need that requires it?

Mercurius84 wrote:

What I mean by avoiding OS handling is something unlike normal compression that just stores the files into a folder.

Not sure what you mean by that; as I already said, desktop OSes already support compression, so that point by itself is moot.
