Explain Explain for me!

    Chris Meech (#1) wrote:
    Looking for some expertise with explains. I have a database procedure that has refused to behave properly since it was migrated from Oracle 9i to Oracle 10g. It completes in under 3 minutes on my Oracle 9i database, but on 10g it takes either 32 minutes or 10 minutes, depending upon whether the statistics are estimated or the table is analyzed. I've been using DBMS_PROFILER and have narrowed it down to a FOR loop involving a cursor. However, when I explain the cursor, not only do I get two different plans, 9i vs. 10g, but the costs are completely different as well, with 10g showing the much lower cost!
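    For reference, a minimal sketch of one way a plan with the cost, cardinality and bytes for each step can be pulled, assuming a standard PLAN_TABLE; the statement below is only a placeholder, since the real cursor query isn't posted here:

    -- Sketch only: the real cursor query is not posted, so the statement here
    -- is a placeholder. DBMS_XPLAN is available from 9iR2 onwards.
    EXPLAIN PLAN SET STATEMENT_ID = 'cursor_test' FOR
    SELECT t.*
      FROM transaction_table t
     WHERE t.some_column = :some_value;   -- placeholder predicate

    -- The 'ALL' format includes cost, cardinality and bytes for every step.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'cursor_test', 'ALL'));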

    Oracle 9i EXPLAIN
    Operation Options Object Optimizer


    SELECT STATEMENT Cost = 1405 CHOOSE
    FILTER
    TABLE ACCESS FULL TRANSACTION_TABLE ANALYZED
    FILTER
    NESTED LOOPS OUTER
    MERGE JOIN CARTESIAN
    FILTER
    NESTED LOOPS OUTER
    INDEX UNIQUE SCAN XPKTRANSACTION ANALYZED
    INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED
    BUFFER SORT
    INDEX RANGE SCAN XPKTRANSACTION ANALYZED
    INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED

    Oracle 10g EXPLAIN
    Operation Options Object Optimizer


    SELECT STATEMENT Cost = 374 ALL_ROWS
    FILTER
    TABLE ACCESS BY INDEX ROWID TRANSACTION_TABLE ANALYZED
    INDEX RANGE SCAN IXTRANSACTION_FIFO ANALYZED
    FILTER
    HASH JOIN RIGHT OUTER
    INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED
    NESTED LOOPS
    FILTER
    NESTED LOOPS OUTER
    INDEX UNIQUE SCAN XPKTRANSACTION ANALYZED
    INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED
    INDEX RANGE SCAN XPKTRANSACTION ANALYZED

    Anybody with some good explain-fu who could suggest some things to try in order to bring the 10g run time back down?

      Jorgen Andersson (#2), in reply to Chris Meech, wrote:

      Not easy to say, as I don't know the size of the tables or what the query looks like. I would also like to see the cost, cardinality and bytes for every level in the plan. But more often than one might believe, a full table scan can be the faster choice. So try the query on 10g with a FULL hint. Or as [Tom Kyte](http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968)[[^](http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968 "New Window")] has said: "full scans are NOT ALWAYS evil, indexes are NOT ALWAYS good"
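      A minimal sketch of what that hint might look like, assuming the cursor drives off TRANSACTION_TABLE; the alias and the predicate are placeholders, not the real query:

      -- Sketch only: the table name comes from the plans above; the alias and
      -- the predicate are placeholders for the real cursor query.
      SELECT /*+ FULL(t) */ t.*
        FROM transaction_table t
       WHERE t.some_column = :some_value;   -- placeholder predicate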

      "When did ignorance become a point of view" - Dilbert

        Mycroft Holmes (#3), in reply to Jorgen Andersson, wrote:

        Jörgen Andersson wrote:

        Or as Tom Kyte[^] has said: "full scans are NOT ALWAYS evil, indexes are NOT ALWAYS good"

        For many years I tried to nail down the best practice for indexing and tuning a database (generally SQL Server). Every time I had a DBA cornered I would quiz them on this subject, and every time I would get back the namby-pamby response "it depends". What sort of response is that? I want to lay down immutable rules that say this is how to set up the indexing; "it depends" just does not cut it. Ah, that's better, I needed that, it's been a slow day here.

        Never underestimate the power of human stupidity RAH


          Jorgen Andersson (#4), in reply to Mycroft Holmes, wrote:

          A DBA that doesn't say "it depends" hasn't been presented with a specific enough case. But I have found one "rule" that hasn't failed me yet when it comes to aggregating data: it's all about minimizing the amount of data (in bytes, not necessarily rows) that the db has to handle. Say, for example, that you want to join together several tables, some holding descriptive text and others holding data that should be aggregated. If you write a query that does the joins and the aggregation in one go, the db will join first and aggregate later, creating millions of rows of text that will just be grouped away when the aggregation happens. Creating all this in memory, or in the worst case on disk, takes time. Instead I break every query down into smaller parts that I materialize using subquery factoring (CTEs for you SQL Server people) and then join the results together. So I write one or more queries that aggregate the data, and one or more separate queries for the descriptions, and then join the aggregated results together at the end. This method is often orders of magnitude faster than a one-piece query.
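          A minimal sketch of the pattern; every table and column name here is invented purely for illustration:

          -- Sketch only: the tables and columns are made up to illustrate the
          -- pattern (aggregate first in a factored subquery, join the
          -- descriptive text afterwards).
          WITH totals AS (
              SELECT /*+ MATERIALIZE */        -- undocumented but commonly used hint to force materialization
                     d.product_id,
                     SUM(d.amount) AS total_amount
                FROM sales_detail d
               GROUP BY d.product_id
          )
          SELECT p.product_name,               -- descriptions joined only after aggregation
                 tot.total_amount
            FROM totals tot
            JOIN products p
              ON p.product_id = tot.product_id;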

          "When did ignorance become a point of view" - Dilbert


            Chris Meech (#5), in reply to Jorgen Andersson, wrote:

            :thumbsup: That's one of my "rules" too. :)

            Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]


              Chris Meech (#6), in reply to Jorgen Andersson, wrote:

              I'm going to try different hints for the cursor to see what happens. Thanks. Ultimately I expect to have to re-develop this whole procedure. It was originally written for Oracle 7 and then had several changes when it was migrated to 8. Since then it has not been touched, but I think I'll be testing it heavily over the holidays. :)

              Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website.[Nishant Sivakumar]
