Code Project
Home › General Programming › C / C++ / MFC

how to load map of maps

  • In reply to _Superman_:

    Use the Matrix[^] class from the Boost library.

    «_Superman_»
    I love work. It gives me something to do between weekends.

    Microsoft MVP (Visual C++)

    Polymorphism in C

    b rad311 wrote (#11):

    Hi, I took a look at Boost, but it didn't seem to have methods for finding values in the matrix or deleting rows, etc.

    • In reply to b rad311 (#11):

      David Crow wrote (#12):

      It has begin() and end() methods that return iterators; those can be used for searching. For deleting, it has an erase_element() method. Put that in a loop to delete an entire row.

      "One man's wage rise is another man's price increase." - Harold Wilson

      "Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons

      "Man who follows car will be exhausted." - Confucius

      • In reply to David Crow (#12):

        b rad311 wrote (#13):

        Hi David, So could I find, for example, a zero in a given column using this technique? How would you use .begin() and .end() to do this? Thanks,
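        For the map-of-maps layout this thread is actually about (rather than the Boost matrix), the begin()/end() idiom could look like this. This is only a sketch, assuming the outer key is the row index and the inner key is the column index; the names are hypothetical:

        ```cpp
        #include <map>
        #include <cstdio>

        // Assumed layout: outer key = row index, inner key = column index.
        typedef std::map<int, std::map<int, double> > Matrix;

        // Return the row index of the first zero found in the given column,
        // or -1 if the column contains no zero.
        int FindZeroInColumn(const Matrix& m, int col)
        {
            for (Matrix::const_iterator row = m.begin(); row != m.end(); ++row)
            {
                std::map<int, double>::const_iterator cell = row->second.find(col);
                if (cell != row->second.end() && cell->second == 0.0)
                    return row->first;           // row containing the zero
            }
            return -1;                           // no zero in that column
        }

        int main()
        {
            Matrix m;
            m[0][0] = 1.5; m[0][1] = 2.5;
            m[1][0] = 0.0; m[1][1] = 3.5;

            std::printf("%d\n", FindZeroInColumn(m, 0));  // 1
            std::printf("%d\n", FindZeroInColumn(m, 1));  // -1
            return 0;
        }
        ```

        The inner find() keeps the column lookup logarithmic; only the outer map is walked linearly.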

      • In reply to b rad311:

        Hi David, Thanks for replying. I'm trying to shy away from vector usage because, due to the large amount of data, vector storage and usage (particularly push_back) are killing the efficiency. So I'm trying to experiment with other methods such as a map of maps.

          El Corazon wrote (#14):

          b-rad311 wrote:

          Thanks for replying. I'm trying to shy away from vector usage because, due to the large amount of data, vector storage and usage (particularly push_back) are killing the efficiency. So I'm trying to experiment with other methods such as a map of maps.

          Maps actually begin to be inefficient at large sizes too. Remember that a map is internally stored as a binary tree in most implementations. That makes searching highly efficient, since finding any one node is the equivalent of a binary search through your data. However, the depth of the tree begins to slow down all operations as well.

          As mentioned, if you reserve space on a vector you can achieve high speed. When a vector runs out of memory it reallocates at roughly 50% above its last block size (the growth factor varies by implementation), then shifts the contents over to the new, larger array. It is that reallocation and move that occupies most of the time of push_back.

          There are also alternatives, depending on what you want to achieve. I use pre-allocated vectors as large hash tables; with an appropriately speedy hash algorithm you can beat a map on very large data sets. It all comes down to need and desired performance.

          So, you spoke of a time key, but what are you storing, and how do you need to access it?

          _________________________ John Andrew Holmes "It is well to remember that the entire universe, with one trifling exception, is composed of others." Shhhhh.... I am not really here. I am a figment of your imagination.... I am still in my cave so this must be an illusion....
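          The reserve-versus-push_back point above can be demonstrated with a short sketch. The exact reallocation count depends on the implementation's growth factor, so the code only counts how often a reallocation is about to happen rather than asserting a specific number:

          ```cpp
          #include <vector>
          #include <cstdio>

          int main()
          {
              // Without reserve(): push_back triggers repeated
              // reallocate-and-copy cycles as capacity is exhausted.
              std::vector<double> grown;
              size_t reallocations = 0;
              for (int i = 0; i < 1000000; ++i)
              {
                  if (grown.size() == grown.capacity())
                      ++reallocations;        // a reallocation is about to happen
                  grown.push_back(i * 0.5);
              }

              // With reserve(): one allocation up front, no copies afterwards.
              std::vector<double> reserved;
              reserved.reserve(1000000);
              size_t cap = reserved.capacity();
              for (int i = 0; i < 1000000; ++i)
                  reserved.push_back(i * 0.5);

              std::printf("reallocations without reserve: %zu\n", reallocations);
              std::printf("capacity unchanged with reserve: %s\n",
                          reserved.capacity() == cap ? "yes" : "no");
              return 0;
          }
          ```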

          • In reply to b rad311 (#13):

            David Crow wrote (#15):

            b-rad311 wrote:

            So could I find, for example, a zero in a given column using this technique?

            At first glance, it appears not; however, I've never used it.

            "One man's wage rise is another man's price increase." - Harold Wilson

            "Fireproof doesn't mean the fire will never come. It means when the fire comes that you will be able to withstand it." - Michael Simmons

            "Man who follows car will be exhausted." - Confucius

            • In reply to El Corazon (#14):

              b rad311 wrote (#16):

              Hello, Thanks for replying. I am storing forces and moments (so double or long doubles) for an engineering application. At times, there could be as many as 1 million entries that need to be stored. I need to be able to quickly locate zeroes and eliminate them along with all other values in the corresponding "rows".

              modified on Thursday, May 27, 2010 9:56 AM
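              A minimal sketch of that cull on a map of maps, assuming time stamps as outer keys and component indices (force/moment columns) as inner keys; the names and layout are illustrative, not from the thread:

              ```cpp
              #include <map>
              #include <cstdio>

              typedef std::map<double, std::map<int, double> > Rows;  // key: time stamp

              // Erase every row that contains at least one zero value.
              void EraseRowsWithZeros(Rows& rows)
              {
                  for (Rows::iterator row = rows.begin(); row != rows.end(); )
                  {
                      bool hasZero = false;
                      for (std::map<int, double>::const_iterator c = row->second.begin();
                           c != row->second.end(); ++c)
                      {
                          if (c->second == 0.0) { hasZero = true; break; }
                      }
                      if (hasZero)
                          rows.erase(row++);   // safe erase-while-iterating idiom for map
                      else
                          ++row;
                  }
              }

              int main()
              {
                  Rows rows;
                  rows[0.1][0] = 1.0; rows[0.1][1] = 2.0;
                  rows[0.2][0] = 0.0; rows[0.2][1] = 3.0;   // contains a zero: culled
                  rows[0.3][0] = 4.0; rows[0.3][1] = 5.0;

                  EraseRowsWithZeros(rows);
                  std::printf("%zu\n", rows.size());  // 2
                  return 0;
              }
              ```

              The post-increment inside erase() matters: it advances the iterator before the erased node is invalidated.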

              • In reply to b rad311 (#16):

                El Corazon wrote (#17):

                b-rad311 wrote:

                Thanks for replying. I am storing forces and moments (so double or long doubles) for an engineering application.

                You definitely don't want a map of maps, because it doesn't sound like you have two indexes. You could use a map of a structure, or a pair, and use the one index of time. However, at one million entries you are going to start noticing the significant cost of searching through the map.

                Start with a quick review of performance considerations on STL: GDC 2001 Round Table Report[^], Custom STL Allocators (PowerPoint presentation)[^], and Common Performance Mistakes in Games (PowerPoint presentation)[^] (this is my favorite). Ignore that the latter is for the Xbox; the issues are the same. Games require performance, and the performance of STL containers is a common problem, usually because of a misunderstanding of how they work. When do you make your own? Why? How? Do you leverage the existing STL work when you do? (Yes!)

                If your time is sampled at a constant rate you can turn time directly into an index, so I would use a vector, but I would pre-allocate memory with reserve() so that the allocation occurs only once at the beginning; thereafter, random-access direct memory speed will beat anything hands down.

                If your sample rate is not constant, and you need random access rather than linear reading, you can consider an STL form of a hash when you get into large data sets. There are two possibilities: a one-time hash or a bucket hash. One-time hashes are hard to set up, usually meaning you allocate about 50% more memory than you need in a vector, then generate an index from your time using a mathematical hash function that converts your key directly to an index value. If you choose your hash function well for your key type, this works well, but requires more memory. If you choose your hash function poorly, you tend to reallocate when duplicate keys hit a given index, and there goes your performance down the tubes again....
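                The constant-sample-rate suggestion amounts to something like this; a sketch assuming a 60 Hz rate and a placeholder Sample structure (both are assumptions for illustration):

                ```cpp
                #include <vector>
                #include <cstdio>

                struct Sample { double force; double moment; };

                int main()
                {
                    const double kRate  = 60.0;      // assumed constant sample rate (Hz)
                    const size_t kCount = 1000000;   // ~1 million entries

                    std::vector<Sample> samples;
                    samples.reserve(kCount);         // single up-front allocation

                    for (size_t i = 0; i < kCount; ++i)
                    {
                        Sample s = { i * 0.25, i * 0.5 };  // placeholder data
                        samples.push_back(s);
                    }

                    // Random access by time: index = time * rate, no search needed.
                    double t = 123.5;
                    size_t index = static_cast<size_t>(t * kRate);
                    std::printf("index for t=%.1f: %zu\n", t, index);  // 7410
                    return 0;
                }
                ```

                Lookup by time becomes a multiply and an array access instead of an O(log n) tree walk.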

                • In reply to El Corazon (#17):

                  b rad311 wrote (#18):

                  El Corazon, Thanks for all the info.

                  • In reply to b rad311 (#16):

                    El Corazon wrote (#19):

                    b-rad311 wrote:

                    I need to be able to quickly locate zeroes and eliminate them along with all other values in the corresponding "rows".

                    Okay, I am back... sorry, but work comes first. Anyhow, tell me about this bit. So you are looking for any time that has a zero in one of the other data items? That is, you are trying to find incomplete data sets and remove that time period, so that only complete data sets remain in the final composition of stored data? When you finally remove all the incomplete pieces, are you then going to process the data in time series? Will you be doing any random access by time?

                    If you have no requirement for random access, I would do a cull cycle on a list. A list is good for sequential access, horrible for any find() or random-access method, but good on add/delete/insert:

                    • iteration one: add all data to the list using time as your key
                    • iteration two: using iterators, sequentially check data sets for zero values and remove them from the list
                    • iteration three: using iterators, sequentially do your processing of the final dataset (this is an assumption on my part and may be wrong)

                    If you need random access, then yes, we are probably back to maps; vectors handle adds and deletes poorly, though if you are smart in your methods and have a way of turning time into an index you can do it. I'll go over both ways, using methods I have done.

                    map/multimap (assumptions: need random access, sequential access, and add/delete; maps assume unique keys, multimaps assume duplicate keys exist, and multimaps perform even worse):

                    • iteration one: add all data to the map using time as a key
                    • iteration two: using iterators, sequentially check data sets for zero values and remove them from the map/multimap
                    • iteration three: using find() and iterators, do whatever processing is necessary

                    vector (assumptions: key can be turned into a linear index, needs random access, no insert or push_front, only push_back; the key assumption is the index, e.g. recording data 60 times a second means index = floattime*60):

                    • setup: reserve() the necessary space to prevent allocation-time loss
                    • iteration one: add all data to the vector with push_back only
                    • iteration two: flag as discarded all structures that contain zeros
                    • iteration three: process using a random/sequential access combination, ignoring the flagged items

                    If all you ever do is sequential access you need only a list, which has nearly the power of a vector in sequential access, but finds are horrible time-wasters in lists....

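                    The list-based cull described above could be sketched like this; the DataSet structure and the zero-means-incomplete test are assumptions for illustration:

                    ```cpp
                    #include <list>
                    #include <cstdio>

                    struct DataSet
                    {
                        double time;
                        double force;
                        double moment;
                    };

                    // Incomplete data set: any quantity recorded as zero (assumed rule).
                    static bool IsIncomplete(const DataSet& d)
                    {
                        return d.force == 0.0 || d.moment == 0.0;
                    }

                    int main()
                    {
                        // Iteration one: add all data to the list in time order.
                        std::list<DataSet> data;
                        DataSet a = { 0.1, 1.0, 2.0 };
                        DataSet b = { 0.2, 0.0, 3.0 };   // incomplete: will be culled
                        DataSet c = { 0.3, 4.0, 5.0 };
                        data.push_back(a);
                        data.push_back(b);
                        data.push_back(c);

                        // Iteration two: cull the incomplete sets.
                        data.remove_if(IsIncomplete);

                        // Iteration three: sequential processing of the final data set.
                        double total = 0.0;
                        for (std::list<DataSet>::const_iterator it = data.begin();
                             it != data.end(); ++it)
                            total += it->force;

                        std::printf("%zu %.1f\n", data.size(), total);  // 2 5.0
                        return 0;
                    }
                    ```

                    std::list's member remove_if does the whole cull pass in one call, with no iterator invalidation to manage by hand.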

                    • In reply to b rad311 (#16):

                      El Corazon wrote (#20):

                      Something I thought of when answering someone else's question, but it might be of use to you too. Have you considered a skip list instead of a map? Maps are binary trees, and skip lists occasionally offer speed improvements. It all comes down to need and use, but it is worth thinking about: http://www.codersource.net/microsoft-net/c-advanced/skip-list-a-simpler-alternative-to-binary-trees.aspx[^]
