Deleting 2.3 million rows takes time, especially in a 65-million-row table. The way I might handle this is to load a global temporary table with the values returned by the subquery, "create table as select" a new table with the data you want, add the index, and then swap the names. If that is not an option, try loading the values into a temp table, adding an index on the runseq field, and then doing the delete with a NOT IN.

Ken

_____

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Don Doo
Sent: Thursday, December 21, 2006 12:47 PM
To: oracle-l@xxxxxxxxxxxxx
Subject: Delete Performance Issue

Hi,

We are facing a serious performance issue: a delete statement takes 4 hours to remove 2.3 million rows from a 65-million-row table.

- The c_trace table has only one index (the primary key).
- The table is not partitioned (we don't have the budget for the Partitioning option).
- The other tables have fewer than 5,000 rows.
- The subquery returns 3 to 6 rows, depending on the values.

The query:

    delete from c_trace
    where targetperiod = 200612
      and RUNSEQ in (select pr.RunSeq
                     from C_Run pr, C_Summary ss, C_Stage st
                     where pr.RunSeq = ss.RunSeq
                       and ss.stageType = st.stageType
                       and st.name = 'load'
                       and pr.period = 361
                       and ss.Active = 'yes')

V$SESSION_LONGOPS shows:

      1  select OPNAME||' '||MESSAGE||' '||ELAPSED_SECONDS from v$session_longops
      2* where sql_hash_value = 2467621466
    SQL> /

    OPNAME||' '||MESSAGE||' '||ELAPSED_SECONDS
    ----------------------------------------------------------------
    Hash Join Hash Join: : 6592 out of 6592 Blocks done 13688
    Hash Join Hash Join: : 6272 out of 6272 Blocks done 12753
    Hash Join Hash Join: : 6272 out of 6272 Blocks done 13594
    Hash Join Hash Join: : 7488 out of 7488 Blocks done 14050

It looks like the hash join takes 14050 seconds to complete, which matches the time taken by the delete.
    select hash_value, cpu_time, elapsed_time/(1000000*60),
           fetches, disk_reads, buffer_gets, rows_processed
    from v$sql
    where hash_value = 2467621466

    HASH_VALUE   CPU_TIME    FETCHES  DISK_READS  BUFFER_GETS  ROWS_PROCESSED
    ---------- ----------  --------- ----------  -----------  --------------
    2467621466  193010000          0     1265770     13820713         2325397

Oracle version: 9.2.0.4
hash_area_size: 8 MB
sort_area_size: 4 MB

Statistics are current. We are using ASSM for these tables.

I would really appreciate any ideas for improving this statement. We cannot do an event trace until the middle of January next year because we are behind the SLA and don't want to make it slower.

Regards,
Don
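[For concreteness, Ken's CTAS-and-swap suggestion might be sketched roughly as below. This is only a sketch under assumptions: the object names gtt_runseq, c_trace_new, and c_trace_old are invented here, the primary-key columns of c_trace are not known from the post, and any grants, triggers, or foreign keys on c_trace would also have to be recreated before the swap.]

```sql
-- Sketch only: gtt_runseq / c_trace_new / c_trace_old are hypothetical
-- names; adjust for the real schema before using (Oracle 9.2 syntax).

-- 1. Capture the handful of runseq values once, so the three-table join
--    runs a single time instead of during the big delete.
create global temporary table gtt_runseq on commit preserve rows as
select pr.runseq
from   c_run pr, c_summary ss, c_stage st
where  pr.runseq    = ss.runseq
and    ss.stagetype = st.stagetype
and    st.name      = 'load'
and    pr.period    = 361
and    ss.active    = 'yes';

-- 2. Build a new table holding only the rows to KEEP
--    (roughly 62.7 million of the 65 million).
create table c_trace_new nologging as
select t.*
from   c_trace t
where  not (t.targetperiod = 200612
            and t.runseq in (select runseq from gtt_runseq));

-- 3. Recreate the primary key (substitute the real key columns):
-- alter table c_trace_new
--   add constraint c_trace_pk primary key (/* pk columns here */);

-- 4. Swap the names during a quiet window; re-grant privileges,
--    recreate any triggers/FKs, and drop c_trace_old when satisfied.
rename c_trace to c_trace_old;
rename c_trace_new to c_trace;
```

If rebuilding the table is not an option, the same gtt_runseq table can drive the delete directly (delete from c_trace where targetperiod = 200612 and runseq in (select runseq from gtt_runseq)), which at least takes the repeated hash joins out of the delete itself.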