RE: Initialization parameter transactions_per_rollback_segment, can you set this?

  • From: "Arnold, Sandra" <ArnoldS@xxxxxxxx>
  • To: "Jared Still" <jkstill@xxxxxxxxx>
  • Date: Tue, 16 Aug 2005 12:02:29 -0400

Some of these loads take 15 hours or more to process.  It is a lot of
data, with at least one CLOB column in one of the tables.  At times a
record, or records, can be inserted into three different tables.  If the
process fails after 14.5 hours have elapsed and I have to restart from
the very beginning, then that is 14.5 hours I have wasted.  I estimate
it is going to take me close to 3 to 4 weeks to load the entire set of
data, so I cannot waste any of the time that I have.  After a subset of
data is loaded, I then have to sync 5 interMedia indexes, two of which
are User Datastore indexes.  One of these indexes can take at least two
days to sync for one subset of data.  It indexes full-text documents
stored outside of the database as well as the metadata stored in the
database.

 

I can decrease the subset of data but I want to exhaust all of my other
options first.

 

Sandra

  _____  

From: Jared Still [mailto:jkstill@xxxxxxxxx] 
Sent: Tuesday, August 16, 2005 11:57 AM
To: Arnold, Sandra
Cc: sol beach; Oracle-L
Subject: Re: Initialization parameter transactions_per_rollback_segment,
can you set this?

 

I would guess (the testing is up to you :) that the time saved during a
restart of a job by committing each record does not come close to the
time (and frustration) you would save by committing the whole batch
once.


-- 
Jared Still
Certifiable Oracle DBA and Part Time Perl Evangelist

On 8/16/05, Arnold, Sandra <ArnoldS@xxxxxxxx> wrote:

It needs to commit after each record so that it is easy to recover at
the next record to be loaded if something happens during the load.  For
instance, if you are loading 100,000 records, you do not need to load
all 100,000 again if a problem occurs partway through.  You can pick
back up at the next record to be loaded.  The process logs each unique
identifier that it loads, so you know where it was in the process when
it failed.



