Re: PeopleSoft query

  • From: Stephane Faroult <sfaroult@xxxxxxxxxxxx>
  • To: Karl Arao <karlarao@xxxxxxxxx>
  • Date: Mon, 13 Jul 2009 10:43:03 +0200

Karl,

    I understand queries better than execution plans :-). If your query
is massive, and it looks like it is, then scanning the tables, however
big they are, makes sense (if you want to ensure scalability, partition
them, but don't try to use indexes). But when you scan tables, accessing
them AGAIN through an index kills you, especially if you do it multiple
times, because the indexed access adds no value: it takes you to rows you
have already visited during the scan. I hope this makes sense. The sound
approach is therefore a brutal one: scan everything remorselessly, and
while doing so use an analytic function (all right, its cost isn't zero,
since it has to sort in the background, but by and large it does so
efficiently, and here indexes can help) so that you can wrap the result
in an outer query and, by testing the value returned by the analytic
function, know what to keep and what to discard. Subqueries that
mutually depend on each other's results, as you have here, mean slow
death, as you can see.
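
To make it concrete, here is a minimal sketch of what I mean. I am
assuming a PS_JOB-style effective-dated table here; the names are
illustrative, not taken from your actual query:

    -- The classic PeopleSoft pattern: a correlated subquery that
    -- re-visits, through an index, rows the outer scan already read.
    SELECT j.emplid, j.effdt, j.deptid
    FROM   ps_job j
    WHERE  j.effdt = (SELECT MAX(j2.effdt)
                      FROM   ps_job j2
                      WHERE  j2.emplid = j.emplid
                      AND    j2.effdt <= SYSDATE);

    -- Single-scan rewrite: rank the rows while scanning, then test
    -- the analytic result in an outer query to keep the latest row.
    SELECT emplid, effdt, deptid
    FROM  (SELECT j.emplid, j.effdt, j.deptid,
                  ROW_NUMBER() OVER (PARTITION BY j.emplid
                                     ORDER BY j.effdt DESC) rn
           FROM   ps_job j
           WHERE  j.effdt <= SYSDATE)
    WHERE  rn = 1;

The table is read once, the sort behind ROW_NUMBER() is the only extra
cost, and no index probe revisits rows the scan has already touched.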

SF

Karl Arao wrote:
> For additional info, this is the explain plan from DBMS_XPLAN.DISPLAY_CURSOR
>
> -- IF THE SQL GOES HASH JOIN THE RUNTIME FOR THIS SQL IS ESTIMATED 18HOURS



--
//www.freelists.org/webpage/oracle-l

