Running Oracle 10.2.0.3 on RHEL4 x86_64.

We have a range/hash-partitioned table of around 250M rows that has shown a recent performance slowdown. The slowdown occurs during a large batch UPDATE job: the routine selects a large number of records from one table and updates corresponding records in this table based on them. Each job performs anywhere from 8 to 12 million updates.

The table is range-partitioned monthly by a DATE field I'll call foo_date, and hash-subpartitioned by a VARCHAR2 field called foo_id (despite the name, it is not a unique ID, but it is a fairly selective value). The kicker: the primary key for this table is a GLOBAL index on (foo_id, event_id, foo_date). The UPDATE statement references all three of these fields, and the primary key index is used in the query plan (unique scan). We're looking at moving this to a local index, or at least testing it.

Recently, however, the time to finish the job has doubled and sometimes tripled. EM dbconsole shows a lot of "db file sequential read" waits for this UPDATE statement. Over the course of the batch job, the UPDATEs affect rows in the partitions for the current cycle and one month prior. I'm wondering if such a large volume of data makes a global index cumbersome to search through.

--
Don Seiler
http://seilerwerks.wordpress.com
ultimate: http://www.mufc.us
--
//www.freelists.org/webpage/oracle-l
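For anyone following along, here is a minimal DDL sketch of the layout described above. Table name, column types, subpartition count, and partition names are all hypothetical placeholders; only foo_date, foo_id, and event_id come from the original description.

```sql
-- Hypothetical sketch only: range partitioned monthly on foo_date,
-- hash subpartitioned on foo_id, global PK on (foo_id, event_id, foo_date).
CREATE TABLE foo_events (
  foo_id    VARCHAR2(30) NOT NULL,   -- selective, but not unique by itself
  event_id  NUMBER       NOT NULL,
  foo_date  DATE         NOT NULL,
  payload   VARCHAR2(100)
)
PARTITION BY RANGE (foo_date)
SUBPARTITION BY HASH (foo_id) SUBPARTITIONS 8
(
  PARTITION p_2007_01 VALUES LESS THAN (DATE '2007-02-01'),
  PARTITION p_2007_02 VALUES LESS THAN (DATE '2007-03-01')
);

-- Current state: the PK is enforced with a global (nonpartitioned) index.
ALTER TABLE foo_events
  ADD CONSTRAINT foo_events_pk PRIMARY KEY (foo_id, event_id, foo_date)
  USING INDEX;

-- Candidate to test: a LOCAL unique index. A unique local index requires
-- the (sub)partitioning columns to be part of the index key; since the key
-- already contains both foo_date and foo_id, this should qualify.
-- CREATE UNIQUE INDEX foo_events_pk_local
--   ON foo_events (foo_id, event_id, foo_date) LOCAL;
```

With the local variant, a unique lookup that supplies foo_id and foo_date should be pruned to a single subpartition's index segment instead of probing one large global b-tree, which is presumably the effect being tested.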