Code Project

Maintain max x records

Database
Tags: database, sql-server, sysadmin, question
17 Posts 10 Posters 0 Views 1 Watching
  • M Mycroft Holmes

    Delete from mytable
    where ID not in (select top 100 ID from mytable order by ID desc)

    This works fine, and with the minute numbers you are working with it will be OK; with serious numbers I would have a cleanup job that runs periodically.

    Never underestimate the power of human stupidity RAH

    Eduard Keilholz
    #4

    Thanks for the reply, but my question was not how to accomplish the removal but whether my solution using a stored procedure was OK.

    .: I love it when a plan comes together :. http://www.zonderpunt.nl

    • P PIEBALDconsult

      I like the idea of a cleanup job that runs periodically. :thumbsup:

      Eduard Keilholz
      #5

      That's more like the reply I was looking for. I didn't know whether the sp would perform well with the cleanup integrated. A job it is, thanks! ;)

      .: I love it when a plan comes together :. http://www.zonderpunt.nl

      • E Eduard Keilholz

        Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data, so I want a maximum of x (in my case 100) records per object. Since I have 5 objects, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the number of records available for that object. If the number is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard

        .: I love it when a plan comes together :. http://www.zonderpunt.nl
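A sketch of that stored procedure in T-SQL, for reference; the table and column names `History(ID, ObjectId, CreatedAt, Value)` are assumptions, not from the thread:

```sql
-- Sketch only: insert a new reading, then trim that object's history
-- back to the 100 newest rows. All names are illustrative.
CREATE PROCEDURE dbo.InsertHistory
    @ObjectId INT,
    @Value FLOAT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.History (ObjectId, CreatedAt, Value)
    VALUES (@ObjectId, GETDATE(), @Value);

    -- Keep only the 100 newest rows (by identity ID) for this object.
    DELETE FROM dbo.History
    WHERE ObjectId = @ObjectId
      AND ID NOT IN (SELECT TOP 100 ID
                     FROM dbo.History
                     WHERE ObjectId = @ObjectId
                     ORDER BY ID DESC);
END
```

Doing the delete in the same procedure (and ideally the same transaction) as the insert keeps the cap enforced on every write, at the cost of a little extra work per insert.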

        Pete OHanlon
        #6

        You could always do this via a trigger on the table - as long as you have some column to sort on to identify the older rows.

        I have CDO, it's OCD with the letters in the right order; just as they ruddy well should be

        Forgive your enemies - it messes with their heads

        My blog | My articles | MoXAML PowerToys | Onyx
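A minimal sketch of such a trigger, assuming an identity column `ID` to sort on (all names here are illustrative, not from the thread):

```sql
-- Sketch only: after each insert, trim the affected objects' history
-- back to the 100 newest rows. The trigger only deletes (never inserts),
-- so it does not re-fire itself under the default recursion settings.
CREATE TRIGGER dbo.trg_History_Cap
ON dbo.History
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DELETE h
    FROM dbo.History AS h
    JOIN inserted AS i ON i.ObjectId = h.ObjectId
    WHERE h.ID NOT IN (SELECT TOP 100 ID
                       FROM dbo.History
                       WHERE ObjectId = h.ObjectId
                       ORDER BY ID DESC);
END
```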

        • P PIEBALDconsult

          I like the idea of a cleanup job that runs periodically. :thumbsup:

          David Skelly
          #7

          The only problem with a periodic scheduled job is that you may temporarily find yourself with more rows in the table than you are supposed to have:

          A. The scheduled job runs and leaves 100 rows in the table.
          B. I insert another row: there are now 101 rows in the table.
          C. At some later point, the job runs again and leaves 100 rows in the table.

          In between points B and C, you have 101 rows in the table. That may not matter, but it may be important. Even if the job is running frequently, you cannot guarantee that a query won't "see" the table between B and C and produce a spurious result. So, depending on the requirement, a scheduled job may be OK if you don't need to guarantee that the row count limit will always be enforced; otherwise a stored proc or a trigger may be the best way to go (although I am not a fan of triggers in general).

          • D David Skelly

            The only problem with a periodic scheduled job is that you may temporarily find yourself with more rows in the table than you are supposed to have:

            A. The scheduled job runs and leaves 100 rows in the table.
            B. I insert another row: there are now 101 rows in the table.
            C. At some later point, the job runs again and leaves 100 rows in the table.

            In between points B and C, you have 101 rows in the table. That may not matter, but it may be important. Even if the job is running frequently, you cannot guarantee that a query won't "see" the table between B and C and produce a spurious result. So, depending on the requirement, a scheduled job may be OK if you don't need to guarantee that the row count limit will always be enforced; otherwise a stored proc or a trigger may be the best way to go (although I am not a fan of triggers in general).

            Chris Meech
            #8

            This would be a good example of when *not* to use a trigger. Likely your trigger code is going to select on the very table that fired the trigger in the first place. Ensuring that this doesn't open the door to unlimited recursion could become difficult. If there is a high-water mark of how many rows are allowed, using a procedure to enforce it is your best bet. I've seen circular logs where the procedure updates existing records instead of deleting and inserting something new. :)

            Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]

            • E Eduard Keilholz

              Thanks for the reply, but my question was not how to accomplish the removal but whether my solution using a stored procedure was OK.

              .: I love it when a plan comes together :. http://www.zonderpunt.nl

              Mycroft Holmes
              #9

              I do wonder about the business reason to enforce this record limit; that would drive the decision to use a proc or a scheduled job.

              Never underestimate the power of human stupidity RAH

              • E Eduard Keilholz

                Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data, so I want a maximum of x (in my case 100) records per object. Since I have 5 objects, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the number of records available for that object. If the number is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard

                .: I love it when a plan comes together :. http://www.zonderpunt.nl

                PIEBALDconsult
                #10

                I just remembered a third-party system I had to work with a few years ago. There was a particular table that always held 8000 records -- they got reused.

                • M Mycroft Holmes

                  I do wonder about the business reason to enforce this record limit; that would drive the decision to use a proc or a scheduled job.

                  Never underestimate the power of human stupidity RAH

                  Eduard Keilholz
                  #11

                  Hey! I have 8 servers whose performance I want to report on. I use performance counters to acquire system information. Depending on the counter's category and name, I want the counter to store its value every second, or (if less important) at a lower interval (say a minute). I want to keep the 100 latest inserted records per performance counter to report the machine's performance. Since some counters store their value each second, the database table may grow rapidly. I expect about 160 performance counters running at the same time, measuring the performance of 8 different servers. So there's no need to guarantee a max number of records; I can select the latest x records when reporting, and a job may clean up the 'old' data. I may also want to switch to keeping the latest 1000 records or something, but I don't know about that yet. I think the job is the best-performing solution for this... Thanks guys!

                  .: I love it when a plan comes together :. http://www.zonderpunt.nl
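A cleanup job along those lines can trim all counters in one statement, e.g. with `ROW_NUMBER()` (a sketch; the table `CounterHistory(ID, CounterId, ...)` is an assumption, not from the thread):

```sql
-- Sketch only: periodic cleanup job keeping the 100 newest rows
-- (by identity ID) per performance counter.
WITH ranked AS (
    SELECT ID,
           ROW_NUMBER() OVER (PARTITION BY CounterId
                              ORDER BY ID DESC) AS rn
    FROM dbo.CounterHistory
)
DELETE FROM dbo.CounterHistory
WHERE ID IN (SELECT ID FROM ranked WHERE rn > 100);
```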

                  • D David Skelly

                    The only problem with a periodic scheduled job is that you may temporarily find yourself with more rows in the table than you are supposed to have:

                    A. The scheduled job runs and leaves 100 rows in the table.
                    B. I insert another row: there are now 101 rows in the table.
                    C. At some later point, the job runs again and leaves 100 rows in the table.

                    In between points B and C, you have 101 rows in the table. That may not matter, but it may be important. Even if the job is running frequently, you cannot guarantee that a query won't "see" the table between B and C and produce a spurious result. So, depending on the requirement, a scheduled job may be OK if you don't need to guarantee that the row count limit will always be enforced; otherwise a stored proc or a trigger may be the best way to go (although I am not a fan of triggers in general).

                    Eduard Keilholz
                    #12

                    Thanks David! I don't need to guarantee a max number of records. This[^] post explains what I want my software to do

                    .: I love it when a plan comes together :. http://www.zonderpunt.nl

                    • E Eduard Keilholz

                      Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data, so I want a maximum of x (in my case 100) records per object. Since I have 5 objects, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the number of records available for that object. If the number is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard

                      .: I love it when a plan comes together :. http://www.zonderpunt.nl

                      Dwayne J Baldwin
                      #13

                      The neatest way would be to pre-allocate (insert) 100 (or 500) records in your table. Your stored proc should simply update the oldest record using an implicit autocommit transaction. The data type used to determine the oldest record depends on the possible update frequency. This guarantees your original request of maintaining max x records at all times.

                      Dwayne J. Baldwin
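A sketch of that update-in-place idea, assuming the 100 rows per object are pre-seeded and a `CreatedAt` column identifies the oldest slot (all names are illustrative, not from the thread):

```sql
-- Sketch only: overwrite the oldest pre-allocated slot for this object
-- instead of deleting and inserting, so the row count never changes.
CREATE PROCEDURE dbo.RecordHistory
    @ObjectId INT,
    @Value FLOAT
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.History
    SET Value = @Value,
        CreatedAt = GETDATE()
    WHERE ID = (SELECT TOP 1 ID
                FROM dbo.History
                WHERE ObjectId = @ObjectId
                ORDER BY CreatedAt ASC);
END
```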

                      • E Eduard Keilholz

                        Hey! I have 8 servers whose performance I want to report on. I use performance counters to acquire system information. Depending on the counter's category and name, I want the counter to store its value every second, or (if less important) at a lower interval (say a minute). I want to keep the 100 latest inserted records per performance counter to report the machine's performance. Since some counters store their value each second, the database table may grow rapidly. I expect about 160 performance counters running at the same time, measuring the performance of 8 different servers. So there's no need to guarantee a max number of records; I can select the latest x records when reporting, and a job may clean up the 'old' data. I may also want to switch to keeping the latest 1000 records or something, but I don't know about that yet. I think the job is the best-performing solution for this... Thanks guys!

                        .: I love it when a plan comes together :. http://www.zonderpunt.nl

                        Spectre_001
                        #14

                        Unfortunately, the very act of recording performance data has an impact on system performance. The best approach imho for performance reporting, if possible, is to periodically do performance analysis on the business data recorded by the system during the normal operation.

                        Kevin Rucker, Application Programmer QSS Group, Inc. United States Coast Guard OSC Kevin.D.Rucker@uscg.mil "Programming is an art form that fights back." -- Chad Hower

                        • C Chris Meech

                          This would be a good example of when *not* to use a trigger. Likely your trigger code is going to select on the very table that fired the trigger in the first place. Ensuring that this doesn't open the door to unlimited recursion could become difficult. If there is a high-water mark of how many rows are allowed, using a procedure to enforce it is your best bet. I've seen circular logs where the procedure updates existing records instead of deleting and inserting something new. :)

                          Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]

                          Jason Christian
                          #15

                          Well, if your ON INSERT trigger only deletes records, you aren't going to get a circularity problem.

                          • E Eduard Keilholz

                            Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data, so I want a maximum of x (in my case 100) records per object. Since I have 5 objects, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the number of records available for that object. If the number is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard

                            .: I love it when a plan comes together :. http://www.zonderpunt.nl

                            User 4061826
                            #16

                            Check out RRDtool http://oss.oetiker.ch/rrdtool/[^] It's exactly what you need: it's a round-robin database (i.e. when it has reached the end, it overwrites the first entries), and it is specifically designed for graphing sensor data from various sources (server load, network traffic, temperature sensors, etc.). Good luck!

                            • U User 4061826

                              Check out RRDtool http://oss.oetiker.ch/rrdtool/[^] It's exactly what you need: it's a round-robin database (i.e. when it has reached the end, it overwrites the first entries), and it is specifically designed for graphing sensor data from various sources (server load, network traffic, temperature sensors, etc.). Good luck!

                              Eduard Keilholz
                              #17

                              Cool, I'll take a peek, thanks a lot!

                              .: I love it when a plan comes together :. http://www.zonderpunt.nl
