SQL 2000 to 2008R2 Query Performance

Tags: database, design, performance, question, learning
cjb110 (#1) wrote:

We've upgraded one of our systems to SQL 2008 R2 and certain parts are running like a dog. One statement that would have taken an average of 20-40 minutes on the old box took from 11:40 to 00:35. It's not every statement, just odd ones. It's a 2-CPU box, the CPU usage sits at 50%, and there's hardly any activity in Activity Monitor (no significant resource waits, no I/O, no requests). Unfortunately this didn't happen during conversion, system test or UAT, so we've implemented and are now hitting these issues :( Has anybody got some hints, places to look, or good resources to read up on? One thing we would like to work out: is it a fact that 2008 does things differently and that query design, indexes etc. need to be re-developed/analysed?
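Beyond Activity Monitor, the same picture can be pulled from the DMVs. A minimal sketch, assuming it is run while the slow statement is executing (the column list is just the common ones):

-- what each active request is doing and waiting on (SQL 2005+ DMV)
select session_id, status, command, wait_type, wait_time, cpu_time, total_elapsed_time
from sys.dm_exec_requests
where session_id > 50;  -- skip system sessions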

Lost User (#2) wrote:

Rebuild all indexes, update statistics, and then try again.
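A minimal T-SQL sketch of that maintenance pass. The table names are taken from later posts in this thread; WITH FULLSCAN and the choice of tables are assumptions, not something prescribed here:

-- rebuild indexes and refresh statistics on the main working tables
alter index all on dbo.EIRAccountBalancePerPeriod rebuild;
alter index all on dbo.EIRGroupAccountsToBeTotaled rebuild;
update statistics dbo.EIRAccountBalancePerPeriod with fullscan;
update statistics dbo.EIRGroupAccountsToBeTotaled with fullscan;
exec sp_updatestats;  -- quick default-sampling pass over every table in the database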

In reply to cjb110 (#1):


David Mujica (#3) wrote:

Since it seems that you can isolate it down to a SQL statement, you might want to put that statement into SQL Server Management Studio and, under the Query menu, enable the "Include Actual Execution Plan" option. After running the statement you should get an idea of where the problem lies. You can also do things like including statistics output. Good luck. :thumbsup:
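For example, a sketch of the statistics options mentioned above; the SELECT is only a placeholder for the slow statement:

set statistics io on;    -- logical/physical reads per table
set statistics time on;  -- CPU and elapsed time per statement
-- run the slow statement here, with "Include Actual Execution Plan" enabled in SSMS
select count(*) from dbo.EIRAccountBalancePerPeriod;  -- placeholder
set statistics io off;
set statistics time off;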

In reply to David Mujica (#3):


cjb110 (#4) wrote:

In our couple of runs we have identified two separate statements. The problem is that this system is wiped each run and the data then passed through changes in size, so we probably have not found them all, and we have not yet been down all the varying processing paths. So I was wondering: when converting to 2008, will you always find statements that no longer work efficiently and simply have to find and fix them all? Or is there something we might have set wrong? If everyone says it's the former then fine, I can plan for that; I just want to make sure before saying we need to make more changes.

In reply to Lost User (#2):


cjb110 (#5) wrote:

Rebuilding indexes didn't seem to help, but updating all the stats did: the sample I was looking at now processes in 6 seconds rather than 9 hours. My concern now is that I have Auto Update Statistics on, and Auto Update Statistics Asynchronously on, and the way the system works is that each stage wipes the working tables before processing, so the stats are invalidated every run. We could obviously go through and do manual stats updates, but should we need to, given we didn't for 2000? Also, I think SQL is memory-starved; could that be preventing it from auto-updating stats?
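A sketch of how that could be checked and worked around, assuming a manual refresh step is added after each stage repopulates its working tables (WITH FULLSCAN is an assumption; default sampling may be enough):

-- confirm the database-level auto-stats settings mentioned above
select name, is_auto_update_stats_on, is_auto_update_stats_async_on
from sys.databases
where name = db_name();

-- explicit refresh once a stage has reloaded its working tables
update statistics dbo.EIRAccountBalancePerPeriod with fullscan;
update statistics dbo.EIRGroupAccountsToBeTotaled with fullscan;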

In reply to cjb110 (#4):


David Mujica (#6) wrote:

Hmmm ... let's look at this differently. Do you have an example of a SQL statement which took longer to execute under 2008, and have you found a solution (for example, rebuilding an index)? Are you running SQL Server 2008 at some compatibility level which is causing it to work inefficiently? For example, run

select compatibility_level from sys.databases where name=db_name()

and you should get 100 for SQL Server 2008. I've recently upgraded an application from SQL 2005 to SQL 2008 and things seem to be just fine with the queries, etc. I took a full backup from SQL 2005, created a brand new database on our SQL 2008 server and restored the backup. Very straightforward, no complaints. (The application which accesses the database was also upgraded, but I have no control over that because it is an application we purchased.) If you need more detailed help, send me a private message and I can point you to a SQL Server professional. You can continue to post your questions here and I will do the best I can. :cool:
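If it came back as 80 or 90 instead, the level can be changed like this (a sketch; YourDatabase is a placeholder name):

-- raise the compatibility level; 100 = SQL Server 2008 / 2008 R2
alter database YourDatabase set compatibility_level = 100;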

In reply to David Mujica (#6):


cjb110 (#7) wrote:

The compatibility level is set to 100; we've not tried 80... We do seem to have resolved the issue: the server now has 8 GB rather than 4 GB (the 2000 server had 4 GB). One query that took hours on 2008, but dropped to seconds once we added the 8 GB, was:

update outMortgageAsset
set
    OpeningBalance = z.OpeningBalance
  , ClosingBalance = z.ClosingBalance
  , AvgBalance = z.AvgBalance
from outMortgageAsset b
join
(
    select
        GroupID = a.GroupID
      , MonthDate = k.MonthDate
      , OpeningBalance = sum(k.OpeningBalance)
      , ClosingBalance = sum(k.ClosingBalance)
      , AvgBalance = sum(k.AvgBalance)
    from EIRGroupAccountsToBeTotaled a
    join
    (
        select
            monthdate = b.ForecastMonth
          , accountid = b.AccountID
          , OpeningBalance = isnull(b.CurrentBalance, 0.0)
          , ClosingBalance = isnull(c.CurrentBalance, 0.0)
          , AvgBalance = (isnull(c.CurrentBalance, 0.0)
                          + isnull(b.CurrentBalance, 0.0)) / 2.00000000
        from EIRAccountBalancePerPeriod b
        left join EIRAccountBalancePerPeriod c
            on b.ForecastMonth = c.ForecastMonth - 1
            and c.AccountID = b.AccountID
        union all
        select
            ForecastMonth - 1
          , AccountID
          , OpeningBalance = 0
          , ClosingBalance = isnull(CurrentBalance, 0.0)
          , AvgBalance = (CurrentBalance) / 2.00000000
        from EIRAccountBalancePerPeriod a
        where ForecastMonth =
        (
            select min(ForecastMonth)
            from EIRAccountBalancePerPeriod z
            where z.AccountID = a.AccountID
        )
    ) k
        on a.accountid = k.accountid
    group by a.GroupID, k.MonthDate
) z
    on b.GroupID = z.GroupID
    and fm = z.MonthDate

The row counts for the tables:
• outMortgageAsset: 1,148
• EIRGroupAccountsToBeTotaled: 1,499,860 (clustered index on GroupID, AccountID)
• EIRAccountBalancePerPeriod: 14,347,829 (non-clustered index on MonthDate, ForcastMonth, AccountID)

In reply to cjb110 (#7):


David Mujica (#8) wrote:

Interesting. So after you added the additional memory, the query executed faster? What OS is the SQL 2008 server running vs the SQL 2000 server (64-bit vs 32-bit)? I'm thinking that the SQL 2008 server was struggling for memory, causing poor performance. Does the system seem to be operating better with the 8 GB of memory? It may have been just a memory issue. (It seems like Microsoft OSes and applications need more and more memory with each new release, but memory is cheap these days ...) Hope things are better. :thumbsup:

In reply to David Mujica (#8):


cjb110 (#9) wrote:

Both are 64-bit! The system is sitting at 93% physical memory used, with SQL taking ~6 GB for its private working set. Unfortunately, running a slightly larger dataset through, another earlier statement is now showing the same signs: 50% CPU, but nothing else seems to be happening.
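One knob worth checking in this situation is SQL Server's memory cap, so the OS keeps some headroom. A sketch, assuming roughly 8 GB in the box; the 6144 MB figure is an illustrative assumption, not a recommendation from this thread:

-- cap the buffer pool so Windows keeps ~2 GB for itself
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max server memory (MB)', 6144;
reconfigure;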

In reply to cjb110 (#9):


David Mujica (#10) wrote:

Check out this article on high CPU usage on SQL Server; it sounds like it might help: http://jeffstevenson.karamazovgroup.com/2008/09/identifying-high-cpu-sql-processes.html
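In the same spirit, a sketch of a DMV query (not taken from the linked article) that surfaces the biggest CPU consumers in the plan cache:

-- top 10 cached batches by total CPU since they were compiled
select top 10
       qs.total_worker_time / 1000 as total_cpu_ms
     , qs.execution_count
     , st.text as batch_text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) as st
order by qs.total_worker_time desc;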

In reply to David Mujica (#10):


cjb110 (#11) wrote:

Grr, not getting very far. We copied the DB back to our dev server, which hasn't had the 8 GB upgrade, so it's still on 4 GB, and ran the same dataset through. The dev run finished in a reasonable time; the other is still going and has now slowed down on yet another part of the process... :(

In reply to cjb110 (#11):


cjb110 (#12) wrote:

Might have found something: a high number of CXPACKET waits. We've set MAXDOP to 1 to see if we now get consistent performance; if we're lucky, it will be consistent performance that matches the old server...
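A sketch of the two pieces described above: checking the wait stats, then capping parallelism instance-wide (per-query OPTION (MAXDOP 1) hints are the narrower alternative):

-- waits accumulated since the last restart; CXPACKET near the top points at parallelism skew
select top 10 wait_type, wait_time_ms, waiting_tasks_count
from sys.dm_os_wait_stats
order by wait_time_ms desc;

-- instance-wide MAXDOP = 1, as described in the post
exec sp_configure 'show advanced options', 1;
reconfigure;
exec sp_configure 'max degree of parallelism', 1;
reconfigure;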
