Looks like the Cartesian merge join was the real killer: a huge amount of CPU to build a massive data set that you then eliminated (relatively) quickly by passing it through some hash tables. Then the ranking window sort required you to generate all the data before returning the first row.

Oracle may have eliminated a join predicate through transitive closure to produce that Cartesian join. It was estimating something like 750 rows joined to 27 rows - which doesn't look like a lot of data - but getting 19M * 544. (The nvl() on the date predicate for the small table is why the prediction of 27 appears: 5% of the 544 actual rows.)

(I'm doing a webinar about selectivity calculations on 10th Sept - check the blog: http://jonathanlewis.wordpress.com/public-appearances/ )

Regards
Jonathan Lewis

________________________________
From: Christopher.Taylor2@xxxxxxxxxxxx [Christopher.Taylor2@xxxxxxxxxxxx]
Sent: 08 August 2013 20:47
To: Jonathan Lewis
Cc: rjamya@xxxxxxxxx; oracle-l@xxxxxxxxxxxxx
Subject: RE: Perhaps a _REALLY_ dumb question from an experienced dba?

You're right, I should have provided the link:

SQL Processing Concepts: http://docs.oracle.com/cd/A57673_01/DOC/server/doc/A48506/sqlconce.htm

Thank you for the detailed (and relatively easy to follow) explanation. I did read Tanel's post about using DTrace and how execution programmatically flows down through the plan, but I wasn't sure whether that applied to the fetches or the executions (admittedly I didn't read it as closely as I could have).

After some testing - disabling join elimination, complex_view_merging, and sort_merge_join, and turning off always_semi_join - I could get the execution time down to about 8 seconds. The original plan did have a Cartesian join in it, which went away after working through the above parameters. Something was definitely wonky with the original plan (and with the poorly designed SQL).
Here's the plan that was bad (keeping in mind most of these indexes were created before I got here, minus the IDX0 ones ;-p)

--
//www.freelists.org/webpage/oracle-l
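The cardinality arithmetic Jonathan describes above can be checked with a few lines. This is a minimal sketch, assuming (as the thread suggests) that the optimizer falls back to a fixed 5% selectivity guess for the nvl() date predicate on the small table instead of using column statistics; the row counts are the ones quoted in the thread.

```python
# Sketch of the cardinality estimates discussed above.
# Assumption: a fixed 5% selectivity guess is applied to the nvl()
# date predicate, rather than a statistics-based selectivity.

def estimated_rows(num_rows: int, selectivity: float) -> int:
    """Optimizer-style cardinality estimate, rounded to whole rows."""
    return round(num_rows * selectivity)

small_table_rows = 544       # actual rows in the small table
fixed_guess = 0.05           # 5% guess for the nvl() predicate

estimate = estimated_rows(small_table_rows, fixed_guess)
print(estimate)              # 27 - the prediction seen in the bad plan

# A Cartesian join multiplies the two row sources, so the gap between
# estimate and reality compounds:
print(750 * estimate)            # ~20,250 rows expected from the join
print(19_000_000 * 544)          # ~10.3 billion rows actually produced
```

The point of the comparison is that a two-orders-of-magnitude error on each input becomes a five-orders-of-magnitude error after the Cartesian multiplication, which is why the merge join burned so much CPU building rows that the hash joins then threw away.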