Maintain max x records
-
Delete from mytable
where ID not in (select top 100 ID from mytable order by ID desc)

This works fine, and with the minute numbers you are working with it will be OK. With serious numbers I would have a cleanup job that runs periodically.
Never underestimate the power of human stupidity RAH
Thanks for the reply, but my question was not how to accomplish the removal but whether my solution using a stored procedure was OK.
.: I love it when a plan comes together :. http://www.zonderpunt.nl
-
I like the idea of a cleanup job that runs periodically. :thumbsup:
That's more like the reply I was looking for. I didn't know whether the stored procedure would still perform well with the cleanup integrated into it. A job it is, thanks! ;)
.: I love it when a plan comes together :. http://www.zonderpunt.nl
-
Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data. Therefore I want to have a maximum amount of x (in my case 100) records. Since I have 5 objects available, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the amount of records available for that object. If the amount is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard
.: I love it when a plan comes together :. http://www.zonderpunt.nl
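For reference, a minimal sketch of such a procedure in T-SQL. The table and column names (CounterHistory, ObjectId, Value, and an identity column ID) are invented for illustration:

```sql
CREATE PROCEDURE dbo.InsertHistory
    @ObjectId int,
    @Value float
AS
BEGIN
    SET NOCOUNT ON;

    -- Trim this object down to at most 99 rows, keeping the newest ones
    DELETE FROM dbo.CounterHistory
    WHERE ObjectId = @ObjectId
      AND ID NOT IN (SELECT TOP 99 ID
                     FROM dbo.CounterHistory
                     WHERE ObjectId = @ObjectId
                     ORDER BY ID DESC);

    -- Insert the new value; the table now holds at most 100 rows per object
    INSERT INTO dbo.CounterHistory (ObjectId, Value)
    VALUES (@ObjectId, @Value);
END
```

Deleting by "not in the newest 99" rather than counting first saves the separate count step the original description mentions.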
You could always do this via a trigger on the table - as long as you have some column you can sort on to identify the older rows.
I have CDO, it's OCD with the letters in the right order; just as they ruddy well should be
Forgive your enemies - it messes with their heads
-
I like the idea of a cleanup job that runs periodically. :thumbsup:
The only problem with a periodic scheduled job is that you may temporarily find yourself with more rows in the table than you are supposed to have.

A. The scheduled job runs and leaves 100 rows in the table.
B. I insert another row: there are now 101 rows in the table.
C. At some later point, the job runs again and leaves 100 rows in the table.

In between points B and C, you have 101 rows in the table. That may not matter, but it may be important. Even if the job is running frequently, you cannot guarantee that a query won't "see" the table between B and C and produce a spurious result. So, depending on the requirement, a scheduled job may be OK if you don't need to guarantee that the row count limit will always be enforced; otherwise a stored proc or a trigger may be the best way to go (although I am not a fan of triggers in general).
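If the limit does have to hold at all times, one way is to do the insert and the trim in a single transaction, so that no committed state ever exceeds 100 rows per object. A sketch, with table and column names invented for illustration:

```sql
BEGIN TRANSACTION;

INSERT INTO dbo.CounterHistory (ObjectId, Value)
VALUES (@ObjectId, @Value);

-- Trim back down to the 100 newest rows for this object
DELETE FROM dbo.CounterHistory
WHERE ObjectId = @ObjectId
  AND ID NOT IN (SELECT TOP 100 ID
                 FROM dbo.CounterHistory
                 WHERE ObjectId = @ObjectId
                 ORDER BY ID DESC);

COMMIT TRANSACTION;
```

Under the default READ COMMITTED isolation level, other readers never see the uncommitted 101st row, so the limit is never visibly broken.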
-
The only problem with a periodic scheduled job is that you may temporarily find yourself with more rows in the table than you are supposed to have.

A. The scheduled job runs and leaves 100 rows in the table.
B. I insert another row: there are now 101 rows in the table.
C. At some later point, the job runs again and leaves 100 rows in the table.

In between points B and C, you have 101 rows in the table. That may not matter, but it may be important. Even if the job is running frequently, you cannot guarantee that a query won't "see" the table between B and C and produce a spurious result. So, depending on the requirement, a scheduled job may be OK if you don't need to guarantee that the row count limit will always be enforced; otherwise a stored proc or a trigger may be the best way to go (although I am not a fan of triggers in general).
This would be a good example of when to *not* use a trigger. Likely your trigger code is going to select on the very table that fired the trigger in the first place. Ensuring that this doesn't open the door for unlimited recursion could become difficult. If there is a high water mark of how many rows are allowed, making use of a procedure to enforce that is your best bet. I've seen circular logs where the procedure will update existing records, instead of deleting and inserting something new. :)
Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
-
Thanks for the reply, but my question was not how to accomplish the removal but whether my solution using a stored procedure was OK.
.: I love it when a plan comes together :. http://www.zonderpunt.nl
I do wonder about the business reason to enforce this record limit, that would drive the decision to use a proc or a scheduled job.
Never underestimate the power of human stupidity RAH
-
Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data. Therefore I want to have a maximum amount of x (in my case 100) records. Since I have 5 objects available, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the amount of records available for that object. If the amount is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard
.: I love it when a plan comes together :. http://www.zonderpunt.nl
I just remembered a third-party system I had to work with a few years ago. There was a particular table that always held 8000 records -- they got reused.
-
I do wonder about the business reason to enforce this record limit, that would drive the decision to use a proc or a scheduled job.
Never underestimate the power of human stupidity RAH
Hey! I have 8 servers whose performance I want to report on. I use performance counters to acquire system information. Depending on the counter's category and name, I want the counter to store its value every second, or (if less important) at a lower interval (say a minute). I want to keep the 100 latest inserted records per performance counter to report the machine's performance. Since some counters store their value each second, the database table may grow rapidly. I expect about 160 performance counters running at the same time, measuring the performance of 8 different servers. So there's no need to guarantee a max number of records; I can select the latest x records when reporting, and a job may clean up the 'old' data. I also may want to switch to keeping the latest 1000 records or something, but I don't know about that yet. I think the job is the best performing solution for this... Thanks guys!
.: I love it when a plan comes together :. http://www.zonderpunt.nl
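For a scenario like that, the periodic cleanup job can trim every counter in one statement. A sketch, assuming a table CounterHistory with columns CounterId and an identity column ID (both names made up):

```sql
-- Periodic cleanup: keep only the 100 newest rows per counter.
WITH Ranked AS (
    SELECT ID,
           ROW_NUMBER() OVER (PARTITION BY CounterId
                              ORDER BY ID DESC) AS rn
    FROM dbo.CounterHistory
)
DELETE FROM Ranked
WHERE rn > 100;
```

Deleting through the CTE targets the underlying table, and changing the retention from 100 to 1000 later is a one-character edit to the WHERE clause.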
-
The only problem with a periodic scheduled job is that you may temporarily find yourself with more rows in the table than you are supposed to have.

A. The scheduled job runs and leaves 100 rows in the table.
B. I insert another row: there are now 101 rows in the table.
C. At some later point, the job runs again and leaves 100 rows in the table.

In between points B and C, you have 101 rows in the table. That may not matter, but it may be important. Even if the job is running frequently, you cannot guarantee that a query won't "see" the table between B and C and produce a spurious result. So, depending on the requirement, a scheduled job may be OK if you don't need to guarantee that the row count limit will always be enforced; otherwise a stored proc or a trigger may be the best way to go (although I am not a fan of triggers in general).
Thanks David! I don't need to guarantee a max number of records. This[^] post explains what I want my software to do.
.: I love it when a plan comes together :. http://www.zonderpunt.nl
-
Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data. Therefore I want to have a maximum amount of x (in my case 100) records. Since I have 5 objects available, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the amount of records available for that object. If the amount is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard
.: I love it when a plan comes together :. http://www.zonderpunt.nl
The neatest way would be to pre-allocate (insert) 100 (or 500) records in your table. Your stored proc should simply update the oldest record using an implicit autocommit transaction. The data type used to determine the oldest record depends on the possible update frequency. This guarantees your original request of maintaining max x records at all times.
Dwayne J. Baldwin
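A sketch of that circular approach, assuming 100 rows have already been pre-inserted per object and the table has a LastUpdated column to find the oldest slot (all names invented for illustration):

```sql
CREATE PROCEDURE dbo.WriteHistory
    @ObjectId int,
    @Value float
AS
BEGIN
    SET NOCOUNT ON;

    -- Overwrite the oldest slot for this object instead of inserting,
    -- so the row count never changes.
    UPDATE dbo.CounterHistory
    SET Value = @Value,
        LastUpdated = GETDATE()
    WHERE ID = (SELECT TOP 1 ID
                FROM dbo.CounterHistory
                WHERE ObjectId = @ObjectId
                ORDER BY LastUpdated ASC);
END
```

Because the statement is a single UPDATE, the implicit autocommit transaction mentioned above is all it needs, and the 100-row limit can never be violated, even transiently.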
-
Hey! I have 8 servers whose performance I want to report on. I use performance counters to acquire system information. Depending on the counter's category and name, I want the counter to store its value every second, or (if less important) at a lower interval (say a minute). I want to keep the 100 latest inserted records per performance counter to report the machine's performance. Since some counters store their value each second, the database table may grow rapidly. I expect about 160 performance counters running at the same time, measuring the performance of 8 different servers. So there's no need to guarantee a max number of records; I can select the latest x records when reporting, and a job may clean up the 'old' data. I also may want to switch to keeping the latest 1000 records or something, but I don't know about that yet. I think the job is the best performing solution for this... Thanks guys!
.: I love it when a plan comes together :. http://www.zonderpunt.nl
Unfortunately, the very act of recording performance data has an impact on system performance. The best approach imho for performance reporting, if possible, is to periodically do performance analysis on the business data recorded by the system during the normal operation.
Kevin Rucker, Application Programmer QSS Group, Inc. United States Coast Guard OSC Kevin.D.Rucker@uscg.mil "Programming is an art form that fights back." -- Chad Hower
-
This would be a good example of when to *not* use a trigger. Likely your trigger code is going to select on the very table that fired the trigger in the first place. Ensuring that this doesn't open the door for unlimited recursion could become difficult. If there is a high water mark of how many rows are allowed, making use of a procedure to enforce that is your best bet. I've seen circular logs where the procedure will update existing records, instead of deleting and inserting something new. :)
Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
Well, if your insert trigger (AFTER INSERT in T-SQL) only deletes records - you aren't going to get a circularity problem.
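For completeness, a delete-only trigger along those lines might look like this. Table and column names are assumptions for the sketch:

```sql
CREATE TRIGGER trg_TrimHistory
ON dbo.CounterHistory
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Trim each object that just received a row back down to 100 records.
    -- The trigger performs no inserts, so it cannot re-fire itself.
    DELETE h
    FROM dbo.CounterHistory h
    WHERE h.ObjectId IN (SELECT DISTINCT ObjectId FROM inserted)
      AND h.ID NOT IN (SELECT TOP 100 ID
                       FROM dbo.CounterHistory
                       WHERE ObjectId = h.ObjectId
                       ORDER BY ID DESC);
END
```

Note the use of the inserted pseudo-table so the trigger handles multi-row inserts correctly, not just single-row ones.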
-
Hey guys, I have a SQL Server database in which I want to store history data for a couple of objects. However, I don't want the table to keep 'old' data. Therefore I want to have a maximum amount of x (in my case 100) records. Since I have 5 objects available, the max number of records in my table cannot be larger than 500. The way I accomplish this is to insert records using a stored procedure. The stored procedure checks the amount of records available for that object. If the amount is larger than 99, it removes (recordamount - 99) records. Then the stored procedure inserts the new value. Is there a neater way to accomplish this? Thanks a lot! Eduard
.: I love it when a plan comes together :. http://www.zonderpunt.nl
Check out RRDtool http://oss.oetiker.ch/rrdtool/[^] It's exactly what you need, it's a round robin database (i.e. when it has reached the end, it overwrites the first entries) and it is specifically designed for graphing sensors from various sources (server load, network traffic, temperature sensors, etc). Good luck!
-
Check out RRDtool http://oss.oetiker.ch/rrdtool/[^] It's exactly what you need, it's a round robin database (i.e. when it has reached the end, it overwrites the first entries) and it is specifically designed for graphing sensors from various sources (server load, network traffic, temperature sensors, etc). Good luck!
Cool, I'll take a peek, thanks a lot!
.: I love it when a plan comes together :. http://www.zonderpunt.nl