Code Project
How to spread traffic on a table

Category: Database
Tags: database, question, sales, help, tutorial
21 Posts 7 Posters
  • Johan Hakkesteegt wrote:

    I am not sure if I am phrasing the question right, which is probably also why I haven't been able to find an answer anywhere. The situation is as follows: I have an ERP system using an MS SQL 2005 database. Several tables in the database contain document data (i.e. orders, deliveries, invoices, etc.). These tables get a lot of read and write action under normal use of the ERP system. There are also several heavy ad hoc queries saved in the system, plus some user applications and timed console apps that run heavy queries (sales reports, purchase forecasts, and such).

    The result is that daily use places such a strain on the document tables that we get constant timeouts, and/or applications (and the ERP system itself) grinding to a halt when too many users / applications are using the system at the same time.

    The question is: considering that I can hardly touch the database settings at all, and I certainly cannot fool around with the table structures themselves, I am thinking of making our applications and ad hoc queries not perform read operations directly on the tables in question. Most of the data only needs to be up to date to within, say, an hour. What are my options? Would using views or a synchronized db help? Any other solution? Cheers, Johan

    My advice is free, and you may get what you paid for.

    Jorgen Andersson wrote (#6):

    If you are joining together a lot of tables and then aggregating over that join in one big query, you'll take a serious performance hit. Have a look at whether it's possible to do the aggregation separately and then make the joins, similar to this:

    WITH SalAgg AS (
        SELECT DeptID, SUM(sal) AS sal
        FROM Emp
        GROUP BY DeptID
    )
    SELECT d.DeptID,
           s.sal
           -- plus more info, and even more info, from Dept
    FROM SalAgg s
    JOIN Dept d
      ON s.DeptID = d.DeptID

    If you don't like CTE[^]s you can of course use inline views instead. Also take a look at Indexed views[^].
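    For comparison, the inline-view version of the same query might look like this (the extra Dept columns are placeholders, as in the example above):

    ```sql
    -- Same aggregation, written as a derived table (inline view)
    -- instead of a CTE.
    SELECT d.DeptID,
           s.sal
           -- plus whatever other Dept columns you need
    FROM (
        SELECT DeptID, SUM(sal) AS sal
        FROM Emp
        GROUP BY DeptID
    ) AS s
    JOIN Dept d
      ON s.DeptID = d.DeptID
    ```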

    List of common misconceptions

    modified on Tuesday, March 22, 2011 8:50 AM

  • Mycroft Holmes wrote:

      There are a huge number of things you can do to tune the performance on a database. It sounds like you have a major corporate database and no real skills (DBA level) to service the thing. I STRONGLY suggest hiring a consultant (finding a good one can be a challenge) to look at the infrastructure.

      Never underestimate the power of human stupidity RAH

      Johan Hakkesteegt wrote (#7):

      I think you got the wrong end of the stick. Admittedly, I don't have the skills of a dedicated database administrator, but I am not completely useless with MS SQL. Our company is just a small (50 people) outfit, our ERP system is a small SAP product (hence no touching db settings and such), and the database (settings, structure, etc.) in question is designed by SAP and dedicated specifically to the ERP system. We did engage a consultant, in a sense, by having SAP support take a look at it, but they could find nothing wrong and only suggested some basic maintenance tasks (which we already run). The problem appears to be that normal daily use, ad hoc queries, and all the automation we have built around the system strain the database beyond its original design. This is why I am looking into lessening the strain instead of tuning the db.

      My advice is free, and you may get what you paid for.

  • In reply to Johan Hakkesteegt's original question:

        Wendelius wrote (#8):

        There are lots of possibilities. The first question I have in mind is whether you are using an ERP product or a system built in-house? Depending on that, the possibilities are perhaps narrowed because of responsibility issues. Depending on the situation I would consider (and talk with the responsible party about) tools such as (in no particular order):
        - a standby database in read-only mode: the data is not up to date, but near it (depending on the backup strategy); all ad hoc queries go there
        - filegroups, to spread your data across multiple devices
        - partitioning, again to spread your data across multiple devices
        - replication, of course, as already mentioned
        And especially if the system is built in-house:
        - pinpoint the bottlenecks
        - use smart indexing
        - reconsider the design, even if it affects application logic, etc.
        The list most certainly isn't complete, but it should raise questions and bring different tools into the discussion at your place. mika
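        As a rough sketch of the filegroup idea (all object names and paths here are illustrative, and on a vendor-controlled ERP database this is something to clear with the vendor first):

        ```sql
        -- Add a filegroup backed by a file on a separate physical disk,
        -- then place an index there so reporting reads hit another spindle.
        ALTER DATABASE ErpDb ADD FILEGROUP ReportFG;

        ALTER DATABASE ErpDb
        ADD FILE (NAME = ReportData,
                  FILENAME = 'F:\Data\ReportData.ndf',
                  SIZE = 1GB)
        TO FILEGROUP ReportFG;

        CREATE NONCLUSTERED INDEX IX_Orders_DocDate
            ON dbo.Orders (DocDate)
            ON ReportFG;
        ```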

        The need to optimize rises from a bad design.My articles[^]

  • In reply to Wendelius (#8):

          Johan Hakkesteegt wrote (#9):

          Thanks a lot for these tips. We are in fact using an ERP product, so I am indeed not really allowed to touch the db. I am going to look into replication for now, but in a couple of years we are due for a new server (hardware), and I will start looking into spreading filegroups over several disks then.

          My advice is free, and you may get what you paid for.

  • In reply to Jorgen Andersson (#6):

            Johan Hakkesteegt wrote (#10):

            I use both normal joins and inline views, but I didn't know that aggregating over straightforward joins performs differently from using inline views. I had also never seen the WITH structure you used in your example; thanks very much for that. I am going to check a lot of our apps and queries now to see if I can streamline them this way.

            My advice is free, and you may get what you paid for.

  • In reply to Johan Hakkesteegt (#9):

              Wendelius wrote (#11):

              No problem. Before going to replication, which requires a bit more administration, I would suggest having a look at standby databases in SQL Server. I'm suggesting this since, from an administrative point of view, you won't have issues if the structure of the db is changed, etc. Of course, if you want to modify the data when it's transferred from the primary to the secondary database, then replication is the better option.
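              A minimal sketch of bringing up such a standby copy, assuming a full backup of the production database and illustrative file names; the STANDBY option is what leaves the copy readable between restores:

              ```sql
              -- Restore a copy in standby mode: the database is read-only,
              -- usable for ad hoc queries, and can still accept further
              -- transaction log restores.
              RESTORE DATABASE ErpDb_Standby
              FROM DISK = 'F:\Backup\ErpDb_Full.bak'
              WITH MOVE 'ErpDb_Data' TO 'F:\Standby\ErpDb_Standby.mdf',
                   MOVE 'ErpDb_Log'  TO 'F:\Standby\ErpDb_Standby.ldf',
                   STANDBY = 'F:\Standby\ErpDb_Undo.dat';
              ```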

              The need to optimize rises from a bad design.My articles[^]

  • Corporal Agarn wrote:

                Sorry I could not give more info, but I have only played around with publish replication (no actual usage). We do log shipping, which could also work, but I think publishing would be better for you.

                Johan Hakkesteegt wrote (#12):

                Don't worry about it, I don't mind doing the research. That nudge in the right direction was really all I needed.

                My advice is free, and you may get what you paid for.

  • In reply to Johan Hakkesteegt (#9):

                  David Mujica wrote (#13):

                  Excellent idea to spread the load into filegroups across separate physical disks. (Notice I say physical disks: if the disks are part of an array and you move filegroups from E: to F: and both are part of the same array, then nothing is really gained.) Also, check into SQL Profiler; use it to investigate what your report queries are actually doing. You might be surprised to find that a report query is not using an index. Sounds like you've gotten a whole pile of advice and you are moving along. Report back what you've found; we all may find it valuable. Good luck.
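                  As a complement to Profiler, SQL Server 2005 can also report index usage directly through a dynamic management view; a sketch (the database name is a placeholder):

                  ```sql
                  -- Seeks, scans, and lookups per index since the last service
                  -- restart. A table with heavy scans and few seeks suggests a
                  -- missing or unused index.
                  SELECT OBJECT_NAME(s.object_id) AS table_name,
                         i.name                   AS index_name,
                         s.user_seeks,
                         s.user_scans,
                         s.user_lookups
                  FROM sys.dm_db_index_usage_stats AS s
                  JOIN sys.indexes AS i
                    ON i.object_id = s.object_id
                   AND i.index_id  = s.index_id
                  WHERE s.database_id = DB_ID('ErpDb')   -- placeholder name
                  ORDER BY s.user_scans DESC;
                  ```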

  • In reply to Wendelius (#11):

                    Johan Hakkesteegt wrote (#14):

                    An interesting suggestion. From what I just quickly looked up on standby servers, the principal goal is failover. However, using such a "backup" database for my purposes would obviously be perfect, as I will only need to read from it. As I also understand it, the basic idea is (and what it would mean for us):
                    1. make a backup (10 - 15 min)
                    2. copy the backup elsewhere (we wouldn't need to do this)
                    3. restore the backup to another database (15 - 20 min)
                    That would be approximately 25 - 35 min per operation. How much or how little does the operation affect server performance during that time? Also, does it make sense to do this on one and the same physical server?
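                    For comparison, the per-cycle cost drops a lot if the repeating step is a transaction log backup rather than a full backup; a hedged sketch, with illustrative names and paths:

                    ```sql
                    -- On the production database (typically seconds to minutes):
                    BACKUP LOG ErpDb TO DISK = 'F:\LogShip\ErpDb_log.trn';

                    -- On the standby copy, which stays readable between restores:
                    RESTORE LOG ErpDb_Standby
                    FROM DISK = 'F:\LogShip\ErpDb_log.trn'
                    WITH STANDBY = 'F:\LogShip\ErpDb_Undo.dat';
                    ```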

                    My advice is free, and you may get what you paid for.

  • In reply to David Mujica (#13):

                      Johan Hakkesteegt wrote (#15):

                      You actually just mentioned a problem that we are dealing with as well. We use a RAID 5 (I think?) disk system, the one that spreads the data out over all physical disks, so any one disk failing is not a show stopper. This means that I can't dedicate a single drive to my database to increase read/write performance. A mistake I will not make with the next server.

                      My advice is free, and you may get what you paid for.

  • In reply to Johan Hakkesteegt (#14):

                        Wendelius wrote (#16):

                        Exactly. A few comments:

                        Johan Hakkesteegt wrote:

                        1. make a backup (10 - 15 min)

                        Log backup

                        Johan Hakkesteegt wrote:

                        2. copy the backup elsewhere (we wouldn't need to do this)

                        This can be automated

                        Johan Hakkesteegt wrote:

                        3. restore the backup to another database (15 - 20 min)

                        As well as this

                        Johan Hakkesteegt wrote:

                        That would be approximately 25 - 35 min per operation

                        Don't know about that; it depends on the system. Typically a log backup is very fast. Of course the files must be transferred and reapplied (preferably by SQL Server), so it takes some time, but not much.

                        Johan Hakkesteegt wrote:

                        How much or little does the operation affect the server performance during that time

                        It's a backup so yes it affects performance (a bit, again depending on the system).

                        Johan Hakkesteegt wrote:

                        Also, does it make sense to do this on one and the same physical server ?

                        Well, if you're going to set up a standby, why not create it on a separate server while you're at it. Then you gain both a new place for ad hoc queries and a safer environment. Another reason I wouldn't put it on the same server is that you want all your resources (CPU, memory, disks) dedicated to the OLTP database. If you install the standby on the same physical server, the instances will have to share these resources, so there's not as much benefit as when they are separated.

                        The need to optimize rises from a bad design.My articles[^]

  • In reply to Johan Hakkesteegt (#14):

                          Corporal Agarn wrote (#17):

                          As stated, we use log shipping for failover. This requires a second server, whereas replication can be done on the same instance (into a different database). Since this is an ERP, the schema should not change often, so the replication should not need to be changed often. The backup (transaction log), copy, and restore do not interfere with the workings of the production system. We have a database being backed up every fifteen minutes and one backed up every eight hours, and several in between; it depends on the amount of change to the database. I assume that yours is a typical order entry/production database that would need log shipping more often, say every ten minutes.

  • In reply to Johan Hakkesteegt (#10):

                            Jorgen Andersson wrote (#18):

                            The problem is that a CTE may create a temporary table (which you will then see as a Worktable in the plan), but it is not guaranteed to do so; the optimizer can still choose to do it the wrong way. And SQL Server doesn't, as far as I know, have a materialize hint like Oracle's.
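                            One way to force the materialization yourself is an explicit temp table, reusing the Emp/Dept example from earlier in the thread:

                            ```sql
                            -- Materialize the aggregate explicitly; the optimizer
                            -- cannot re-inline it into the outer query.
                            SELECT DeptID, SUM(sal) AS sal
                            INTO #SalAgg
                            FROM Emp
                            GROUP BY DeptID;

                            SELECT d.DeptID, s.sal
                            FROM #SalAgg AS s
                            JOIN Dept AS d
                              ON s.DeptID = d.DeptID;

                            DROP TABLE #SalAgg;
                            ```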

                            List of common misconceptions

  • In reply to Wendelius (#16):

                              Johan Hakkesteegt wrote (#19):

                              Ok thanks a lot, I am going to look into this.

                              My advice is free, and you may get what you paid for.

  • In reply to Johan Hakkesteegt (#19):

                                Wendelius wrote (#20):

                                No problem and good luck. :)

                                The need to optimize rises from a bad design.My articles[^]

  • In reply to Johan Hakkesteegt's original question:

                                  darkelv wrote (#21):

                                  A database snapshot? http://msdn.microsoft.com/en-us/library/ms175876.aspx[^] Also consider pre-generating the results of the commonly used, heavy ad hoc queries, so as not to tie down the actual database.
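                                  A sketch of creating one, with illustrative names (the logical file name must match the source database's, and on SQL Server 2005 snapshots are, as far as I know, an Enterprise Edition feature):

                                  ```sql
                                  -- A snapshot is a read-only, point-in-time view of the
                                  -- source database backed by a sparse file; reporting
                                  -- queries against it run without contending with
                                  -- writers on the live database.
                                  CREATE DATABASE ErpDb_Snapshot
                                  ON (NAME = 'ErpDb_Data',    -- source's logical file name
                                      FILENAME = 'F:\Snapshots\ErpDb_Data.ss')
                                  AS SNAPSHOT OF ErpDb;
                                  ```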
