RE: db file sequential read wait event and Insert performance

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <Harvinder.Singh@xxxxxxxxxxxxx>, "'Finn Jorgensen'" <finn.oracledba@xxxxxxxxx>
  • Date: Thu, 28 Feb 2008 15:48:39 -0500

 

I'm not sure exactly what you mean by having 1 batch and yet it inserts into
15 partitions on one node and 15 partitions on the other node. Possibly this
means you have a supervisor reading the batch and directing the inserts to
the appropriate one of 30 children. Anyway, however you're doing that, as a
diagnostic (and possibly part of the solution) I suggest configuring your
batch so you do all the inserts on one node and its fifteen assigned insert
partitions first, and then the other. You've already diagnosed that 25
minutes of the time is reading and coordinating insert-driven changes into
the indexes. You've noted that most of the read wait is on the two 3-column
indexes, but you didn't mention whether they are local indexes. If they are
global indexes, then even though you've separated the table partition
activity by node, you'll still have to coordinate the index tree where it
does not correspond directly to the partitioning. If such global index
coordination is gating your throughput, you'll probably see the gc_ stats
drop and your overall throughput increase even though you're serializing the
batch into two pieces.
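
(To check whether those indexes are local or global, a dictionary query
along these lines will show it; this is just a sketch, and the owner and
table names are placeholders:)

-- Sketch: are the indexes on the table local or global?
-- (APP_OWNER / BIG_PART_TABLE are placeholder names)
select i.index_name,
       i.partitioned,
       pi.locality        -- 'LOCAL' or 'GLOBAL' for partitioned indexes
from   dba_indexes i
       left join dba_part_indexes pi
              on  pi.owner      = i.owner
              and pi.index_name = i.index_name
where  i.owner      = 'APP_OWNER'
and    i.table_name = 'BIG_PART_TABLE';

A non-partitioned index on a partitioned table (locality null, partitioned =
'NO') behaves as a global index for this purpose.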

 

If you have any control over the order of the batch, you might try ordering
it by the 3-column index that dominates subsequent read queries (or, if
neither is dominant on reads, the one with the longer average key length).
That would at least tend to keep cached the parts of the index tree you'll
need to modify for the index maintenance portion of the insert, for at least
one of the two indexes.
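
(If the batch is staged in the database before the OCI inserts run, that
ordering is just an ORDER BY on the index's columns; the table and column
names below are made up, a sketch only:)

-- Sketch: drive the insert in the key order of the dominant 3-column index
-- (target_table, stage_rows, col_a/col_b/col_c are placeholder names)
insert into target_table
select *
from   stage_rows
order  by col_a, col_b, col_c;   -- same column order as the index definition

If the batch comes from a flat file instead, sorting the file on those
columns before the load gives the same effect.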

 

I'd also be curious what the timing would be if you serialized the inserts
completely, doing the 30 partitions one at a time. That might tease out some
additional thoughts about how to make this as fast as possible.

 

Often when partitioning is by a date column, the new data (inserts) is
skewed toward the younger partitions - is that the case for you?

 

Good luck.

  _____  

From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Harvinder Singh
Sent: Thursday, February 28, 2008 2:30 PM
To: Finn Jorgensen
Cc: oracle-l@xxxxxxxxxxxxx
Subject: RE: db file sequential read wait event and Insert performance

 

The indexes are used by other parts of the application, so we can't drop
them for this testing. Are there any options I can try to reduce these
waits?

(Anyhow, if we drop these indexes, the insert of 12M rows takes only 5
minutes with 250M rows already in the table.)

 

  _____  

<snip>

We are doing benchmark testing of the application and are currently focused
on an area where we insert about 12M rows into 1 table in 1 batch,
committing every 50K rows. Inserts are done by the application using OCI
calls. When we started and the tables were empty, the batch took about 6:30
min; now, with the tables holding 200M rows, the batch takes about 30 min (5
times longer).

Following is the configuration

Oracle 10.2.0.3, 2-node RAC cluster on Linux AS. Each node has 8 CPUs and
32GB RAM; db_buffers is allocated 12GB and shared_pool 3GB on both machines.

We are using ASM (1MB stripe size, the default), ASSM, and local extent
management.

 

The table has 22 columns and 4 indexes: a 2-column primary key, two 3-column
indexes, and one single-column index. The table is range partitioned on a
date column and currently has 30 partitions, and the inserts go to all 30
partitions. We have already done all the filtering at the application layer
so that there is very little internode communication, and the two RAC nodes
are inserting into 15 separate partitions each.
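
For reference, the shape is roughly as below; the table, column, and index
names are made up and only the structure matters:

-- Sketch of the layout (txn_detail and its columns are placeholder names)
create table txn_detail (
  txn_date  date,
  acct_id   number,
  txn_id    number,
  -- ... remaining columns of the 22 ...
  constraint txn_detail_pk primary key (acct_id, txn_id)
)
partition by range (txn_date) (
  partition p01 values less than (to_date('2008-02-01', 'YYYY-MM-DD')),
  partition p02 values less than (to_date('2008-03-01', 'YYYY-MM-DD'))
  -- ... 30 range partitions in total ...
);

-- the two 3-column indexes and the single-column index
-- (as written these are global; adding the LOCAL keyword would
--  equipartition each index with the table)
create index txn_detail_ix1 on txn_detail (acct_id, txn_date, txn_id);
create index txn_detail_ix2 on txn_detail (txn_id, acct_id, txn_date);
create index txn_detail_ix3 on txn_detail (txn_date);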

I checked the wait events; db file sequential read now consumes about 70% of
the total time, and most of that time is spent on the two 3-column indexes
of the table. What options can we try, or what sequence of steps should we
follow, to reduce these waits and improve insert performance?
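
(For the breakdown by segment, a query along these lines against
v$segment_statistics shows where the physical reads land; this is just a
sketch, and the owner name is a placeholder:)

-- Sketch: rank segments by physical reads (APP_OWNER is a placeholder)
select object_name, subobject_name, object_type, value as physical_reads
from   v$segment_statistics
where  owner = 'APP_OWNER'
and    statistic_name = 'physical reads'
order  by value desc;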

 

Thanks

--Harvinder

 

 

 

 
