To add to Wolfgang's note.
I would also eliminate command_type = 47 (PL/SQL) from v$sql, and rows with executions = 0 (the plan has probably been flushed already).
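As a sketch of what those extra predicates might look like when filtering the inline view against v$sql_plan (untested; column names assume 9iR2, and the command_type code 47 is taken from the note above):

select address, sum(cost) cost
from   v$sql_plan p
where  cost is not null
and    exists (select null
               from   v$sql s
               where  s.address = p.address
               and    s.command_type != 47   -- exclude PL/SQL blocks
               and    s.executions   > 0)    -- skip never-executed cursors
group by address;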
Tuning this for large systems might be an interesting exercise, but choosing predicates to eliminate rows from v$sql and then forcing a nested loop join might be the best option.
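One way to force that join order (a sketch only, not tested; the ordered and use_nl hints are standard Oracle hints, and "a" here stands for whatever filtered inline view you build against v$sql_plan):

select /*+ ordered use_nl(b) */
       a.address, a.cost
from   (select address, sum(cost) cost
        from   v$sql_plan
        where  cost is not null
        group  by address) a,
       v$sqlarea b
where  a.address = b.address;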
Regards
Jonathan Lewis http://www.oracle.com/technology/community/oracle_ace/ace1.html#lewis
The Co-operative Oracle Users' FAQ http://www.jlcomp.demon.co.uk/faq/ind_faq.html
Cost Based Oracle: Fundamentals http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
Date: Wed, 19 Apr 2006 08:59:45 -0500 From: "Ethan Post" <post.ethan@xxxxxxxxx> Subject: SQL Cost Compared To Elapsed Time
I posted this a while back with a lot more text and I don't think it ever made it to the list; perhaps I was hitting some size limit. So here is the really short version. It occurred to me that a good way to find SQL for which Oracle may be coming up with the wrong plan is to compare the cost to the elapsed time: if the elapsed time per unit of cost is much higher than normal, then Oracle might be using the wrong plan. The query below was my attempt to locate such SQL. Has anyone ever tried this? -- Tested on 9iR2
col address format a30
col cost format 99990.999
col elap_sec_per_cost format 99990.999
select a.address,
       a.cost,
       -- elapsed_time in v$sqlarea is in microseconds, so divide by
       -- 1,000,000 (not 100,000) to get seconds per unit of cost
       round(b.elap_time_per_exe / 1000000 / a.cost, 3) elap_sec_per_cost
from   (select address, sum(cost) cost
        from   v$sql_plan
        where  cost is not null
        group  by address) a,
       (select address,
               decode(executions, 0, 0, elapsed_time / executions) elap_time_per_exe
        from   v$sqlarea) b
where  a.address = b.address
order  by 3 desc;
-- //www.freelists.org/webpage/oracle-l