RE: SQL Tuning -- large, long-running outer join query

  • From: "Thomas Jeff" <jeff.thomas@xxxxxxxxxxx>
  • To: "Wolfgang Breitling" <breitliw@xxxxxxxxxxxxx>
  • Date: Thu, 19 Jan 2006 14:14:36 -0500

Wolfgang, Mladen, Paul Drake, Gints, and others:  many thanks for your
replies.

First off -- this is a 3rd-party app with canned SQL.  The db is
intended to be a sort of small datamart that pulls from our prod SAP.
I got paged on this at 2:00am last night, and was fighting it until
5:00am, so when I wrote the email this morning before coffee, I can
only plead that I wasn't thinking quite clearly and wrote the email
prematurely.

I say that because a short while after I wrote the email, I finally
realized that the data volume was all wrong.  The database was tuned
for a SAP pull that has been between 200K and 500K records per night
for the past year.  Those 18 million row counts are an anomaly.  After
hunting down the team responsible for this app, I found out that
instead of sending us the normal daily processing history, last
night's SAP pull sent us all data going back to Feb 2004.  Turns out
the POC who asked for that amount of data 1) didn't inform us in
advance, and 2) wasn't sure how much data was going to be sent, or
what the consequences would be.

I see from the replies, however, some areas where I'm deficient or can
improve, so for that, I'm grateful.


Jeff T.

 

-----Original Message-----
From: Wolfgang Breitling [mailto:breitliw@xxxxxxxxxxxxx] 
Sent: Thursday, January 19, 2006 10:54 AM
To: Thomas Jeff
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: SQL Tuning -- large, long-running outer join query

That query has no predicates whatsoever apart from the join predicates,
and those are outer joins. What that means is that Oracle has to read
every row of each of the five tables - at least once.
Like Mladen, I wonder what the original business question is that this
query is supposed to answer. I have the funny feeling that the
developer(s) threw in the outer joins "just in case", for good measure,
especially the last two.
For the SQL as-is, I would try to replace the two "nested loop outer"
operations with hash joins, along with full scans of IA_SALES_ORDLNS
and OD_SALES_ORDLNS. And make sure you have the highest
db_file_multiblock_read_count your OS can handle, and potentially use
parallel slaves.
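
As a rough sketch, hints along these lines should push the optimizer
in that direction (untested; the parallel degree of 4 is only a
placeholder, and since the tables have no aliases the hints reference
the table names directly):

  SELECT /*+ USE_HASH(OD_SALES_ORDLNS IA_SALES_ORDLNS)
             FULL(OD_SALES_ORDLNS) FULL(IA_SALES_ORDLNS)
             PARALLEL(TS_SALES_ORDLNS, 4)
             PARALLEL(OD_SALES_ORDLNS, 4)
             PARALLEL(IA_SALES_ORDLNS, 4) */
         (boatload of columns)
  FROM
    TS_SALES_ORDLNS,
    OD_SALES_ORDLNS,
    IA_SALES_ORDLNS,
    TS_SALES_ORDLNS_HD,
    TS_SALES_ORDLNS_LN
  WHERE
    ... same outer-join predicates as in the original query ...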

Thomas Jeff wrote:
> We have this purchased app that is generating the following query.
> It runs for 8 hours then crashes due to ORA-1555.  I'm reluctant to
> tinker with the undo settings or increase the undo tablespace size,
> as it appears from longops that this thing would likely run for
> another 7-8 hours before completing.  I'm at a loss on how to tune
> this -- I can't touch the physical structure, and I don't know
> enough to see any hints that could possibly help.
> 
> Any ideas?
> 
> 
> SELECT (boatload of columns)
> FROM
>  TS_SALES_ORDLNS,    /*    465,175 rows, 20,930 blocks    */
>  OD_SALES_ORDLNS,    /* 18,733,125 rows, 1,247,303 blocks */
>  IA_SALES_ORDLNS,    /* 18,631,135 rows, 603,410 blocks   */
>  TS_SALES_ORDLNS_HD, /*    474,465 rows, 25,819 blocks    */
>  TS_SALES_ORDLNS_LN  /*    472,645 rows, 14,043 blocks    */
> WHERE
>      TS_SALES_ORDLNS.SALES_ORDER_NUM  = TS_SALES_ORDLNS_HD.SALES_ORD_NUM(+)
>  AND TS_SALES_ORDLNS.SALES_ORDER_ITEM = TS_SALES_ORDLNS_HD.SALES_ORD_ITEM(+)
>  AND TS_SALES_ORDLNS.SALES_ORDER_NUM  = TS_SALES_ORDLNS_LN.SALES_ORD_NUM(+)
>  AND TS_SALES_ORDLNS.SALES_ORDER_ITEM = TS_SALES_ORDLNS_LN.SALES_ORD_ITEM(+)
>  AND TS_SALES_ORDLNS.KEY_ID = OD_SALES_ORDLNS.KEY_ID(+)
>  AND TS_SALES_ORDLNS.KEY_ID = IA_SALES_ORDLNS.KEY_ID(+);
>  

--
Regards

Wolfgang Breitling
Centrex Consulting Corporation
www.centrexcc.com

