Explain Explain for me!
-
Looking for some expertise with EXPLAIN plans. I have a database procedure that has refused to behave properly since it was migrated from Oracle 9i to Oracle 10g. It completes in under 3 minutes on my Oracle 9i database, but on 10g it takes either 32 minutes or 10 minutes depending on whether the statistics are estimated or the table is analyzed. I've been using DBMS_PROFILER and have narrowed the problem down to a FOR loop over a cursor. However, when I explain the cursor, not only do I get two different plans, 9i vs. 10g, but the costs are completely different as well; oddly, it's 10g that reports the much lower cost!
Oracle 9i EXPLAIN
Operation Options Object Optimizer
SELECT STATEMENT Cost = 1405 CHOOSE
FILTER
TABLE ACCESS FULL TRANSACTION_TABLE ANALYZED
FILTER
NESTED LOOPS OUTER
MERGE JOIN CARTESIAN
FILTER
NESTED LOOPS OUTER
INDEX UNIQUE SCAN XPKTRANSACTION ANALYZED
INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED
BUFFER SORT
INDEX RANGE SCAN XPKTRANSACTION ANALYZED
INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED

Oracle 10g EXPLAIN
Operation Options Object Optimizer
SELECT STATEMENT Cost = 374 ALL_ROWS
FILTER
TABLE ACCESS BY INDEX ROWID TRANSACTION_TABLE ANALYZED
INDEX RANGE SCAN IXTRANSACTION_FIFO ANALYZED
FILTER
HASH JOIN RIGHT OUTER
INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED
NESTED LOOPS
FILTER
NESTED LOOPS OUTER
INDEX UNIQUE SCAN XPKTRANSACTION ANALYZED
INDEX RANGE SCAN XPKTRANSACTIONLOT ANALYZED
INDEX RANGE SCAN XPKTRANSACTION ANALYZED

Anybody with some good EXPLAIN foo who could suggest some things to try in order to speed this up?
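For estimated plans like the ones above, 10g can also report what actually happened at run time, not just the optimizer's guess. A sketch; the query below is only a placeholder, since the cursor's real SQL wasn't posted:

```sql
-- Run the statement once with the GATHER_PLAN_STATISTICS hint so the
-- cursor cache records per-step execution statistics.
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*)
FROM   transaction_table;          -- placeholder for the cursor's query

-- Then pull the executed plan from the cursor cache. Comparing E-Rows
-- (estimated) with A-Rows (actual) shows where the cardinality
-- estimates went wrong.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```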
-
Not easy to say, as I don't know the size of the tables or what the query looks like. I would also like to see the cost, cardinality and bytes for every level in the plan. But, more often than one might believe, a full table scan is the faster choice. So try the query on the 10g with a FULL hint. Or, as
[Tom Kyte](http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968)[[^](http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9422487749968 "New Window")]
has said: "full scans are NOT ALWAYS evil, indexes are NOT ALWAYS good"
"When did ignorance become a point of view" - Dilbert
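A sketch of both suggestions in one go; the table name, alias and predicate are stand-ins, since the original query wasn't posted. EXPLAIN PLAN plus DBMS_XPLAN.DISPLAY is one way to get the cost, cardinality and bytes for every step:

```sql
-- The FULL hint forces a full scan of the hinted table so the two
-- access paths can be timed against each other.
EXPLAIN PLAN FOR
SELECT /*+ FULL(t) */ t.*
FROM   transaction_table t
WHERE  t.trans_date >= DATE '2009-01-01';

-- DBMS_XPLAN.DISPLAY lists Cost, Rows (cardinality) and Bytes per step.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```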
-
Jörgen Andersson wrote:
Or as Tom Kyte[^] has said: "full scans are NOT ALWAYS evil, indexes are NOT ALWAYS good"
For many years I tried to nail down the best practice for indexing and tuning a database (generally SQL Server). Every time I had a DBA cornered I would quiz them on the subject, and every time I would get back the namby-pamby response "it depends". What sort of response is that? I want to lay down immutable rules that say this is how to set up the indexing; "it depends" just does not cut it. Ah, that's better, I needed that; it's been a slow day here.
Never underestimate the power of human stupidity RAH
-
A DBA that doesn't say "it depends" hasn't been presented a specific enough case. But I have found one "rule" that hasn't failed me yet when it comes to aggregating data: it's all about minimizing the amount of data (in bytes, not necessarily rows) that the database has to handle.

Say, for example, that you want to join several tables, some holding descriptive text and others holding data that should be aggregated. If you write one query that does the joins and the aggregation in one go, the database will join first and aggregate later, creating millions of rows of text that will just be collapsed when the aggregation happens. Building all of that in memory, or in the worst case on disk, takes time.

Instead, I break queries down into smaller parts that I materialize using subquery factoring (CTEs, for you SQL Server people) and then join their results together: one or more queries that aggregate the data, one or more separate queries for the descriptions, and a final join of the aggregated results at the end. This method is often orders of magnitude faster than a one-piece query.
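A sketch of that aggregate-first pattern in Oracle syntax; the tables and columns (sales, products, and so on) are invented for illustration:

```sql
-- Collapse the big fact table down first, then join the descriptive
-- text against the handful of aggregated rows instead of millions of
-- detail rows.
WITH totals AS (
    SELECT /*+ MATERIALIZE */        -- hint commonly used to force the
           product_id,               -- factored subquery to materialize
           SUM(amount) AS total_amt, -- (undocumented, so a judgment call)
           COUNT(*)    AS sale_cnt
    FROM   sales
    GROUP  BY product_id
)
SELECT p.product_name,
       t.total_amt,
       t.sale_cnt
FROM   totals t
JOIN   products p
  ON   p.product_id = t.product_id;
```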
"When did ignorance become a point of view" - Dilbert
-
:thumbsup: That's one of my "rules" too. :)
Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] Posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website. [Nishant Sivakumar]
-
Jörgen Andersson wrote:
So try the query on the 10g with a FULL hint.
I'm going to try different hints on the cursor to see what happens. Thanks. Ultimately I expect to have to redevelop this whole procedure. It was originally written for Oracle 7 and then changed several times when it was migrated to 8. It hasn't been touched since, but I think I'll be doing some heavy testing over the holidays. :)
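One more hint that may be worth adding to the list (my own suggestion, and the table and predicate below are placeholders): 10g's OPTIMIZER_FEATURES_ENABLE hint asks the optimizer to cost a single statement with an older release's rules, which can show whether the regression really comes from the new optimizer.

```sql
-- Cost this one statement as 9.2 would; if the 9i-style plan comes back
-- and runs fast, the 10g optimizer's choices are the culprit.
SELECT /*+ OPTIMIZER_FEATURES_ENABLE('9.2.0') */
       t.*
FROM   transaction_table t
WHERE  t.status = 'OPEN';
```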
Chris Meech I am Canadian. [heard in a local bar] In theory there is no difference between theory and practice. In practice there is. [Yogi Berra] Posting about Crystal Reports here is like discussing gay marriage on a catholic church’s website. [Nishant Sivakumar]