RE: Excessive transaction tables consistent reads - undo records applied in 11.2.0.3

  • From: "Mark W. Farnham" <mwf@xxxxxxxx>
  • To: <Laimutis.Nedzinskas@xxxxxx>, <oracle-l@xxxxxxxxxxxxx>
  • Date: Wed, 3 Oct 2012 09:34:32 -0400

I don't know of a bug, but quite *likely* you can avoid the problem if you
can get the two jobs to modify blocks in the same order.

If that is already true, are you getting adaptive direct reads and/or many
chained and/or migrated rows?

As a work-around, pending root-cause elucidation: if you can have each job
post to a sentinel table at start that it is running (and commit that bit,
deleting and committing again after success), then you can *see* whether:

1) If the other job is running, do a simple full table scan on the table
being modified before you start modifications (no joins or loops, just blast
through it in a way that forces an FTS). If this does not help, then 2) can
you wait until the other job is simply not running? Running the two in
sequence *might well* take less total elapsed time than letting them
compete, even without some bug. (A concrete sketch of the sentinel approach
follows below.)
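
For concreteness, a minimal sketch of the sentinel idea; all object and job
names here are hypothetical, and you'd adapt the check to however your jobs
are actually scheduled:

  -- Sentinel table (hypothetical names throughout).
  CREATE TABLE job_sentinel (
    job_name   VARCHAR2(30) PRIMARY KEY,
    started_at DATE NOT NULL
  );

  -- At job start: post that this job is running, commit just that bit.
  INSERT INTO job_sentinel (job_name, started_at)
  VALUES ('BATCH_JOB_A', SYSDATE);
  COMMIT;

  -- Before modifying: if the other job posted its sentinel, blast through
  -- the target table with a gratuitous full scan first. The count itself
  -- is thrown away; the point is to visit every block.
  DECLARE
    l_other NUMBER;
    l_dummy NUMBER;
  BEGIN
    SELECT COUNT(*) INTO l_other
      FROM job_sentinel
     WHERE job_name = 'BATCH_JOB_B';
    IF l_other > 0 THEN
      SELECT /*+ FULL(t) */ COUNT(*) INTO l_dummy FROM target_table t;
    END IF;
  END;
  /

  -- After success: delete the sentinel row and commit.
  DELETE FROM job_sentinel WHERE job_name = 'BATCH_JOB_A';
  COMMIT;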

Now, all that was off the top of my head. On reflection I'd *test* whether
deliberately running the two jobs at different times is faster anyway, *and*
whether a gratuitous simple full table scan before the update transaction
usually wins anyway. Undo is a marvelous invention for facilitating
multi-user updates and sorting out the proper results, but you want to avoid
needing it whenever you can do something cheaper. If both tests pan out, the
sentinel thing should both avoid the problem and perform better. And of
course, without details of your transaction mix we can't see whether (to
paraphrase) "your fastest update is an insert", aka partition exchange (well
explained by Tim Gorman, and sketched below), can dovetail with your
existing processing pattern.
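
For anyone unfamiliar with that pattern, a rough sketch (hypothetical names;
the details depend entirely on your partitioning scheme):

  -- Assumes target_tab is partitioned and stage_tab matches its shape
  -- exactly. Build the "updated" image of one partition as inserts...
  INSERT /*+ APPEND */ INTO stage_tab
  SELECT t.id, t.batch_date, t.new_status  -- apply the "update" here
    FROM target_tab PARTITION (p_current) t;
  COMMIT;

  -- ...then swap the staging table in as a near-instant dictionary
  -- operation that generates almost no undo against the target.
  ALTER TABLE target_tab
    EXCHANGE PARTITION p_current WITH TABLE stage_tab
    INCLUDING INDEXES WITHOUT VALIDATION;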

Good luck,

mwf

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx]
On Behalf Of Laimutis.Nedzinskas@xxxxxx
Sent: Wednesday, October 03, 2012 2:44 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: Excessive transaction tables consistent reads - undo records
applied in 11.2.0.3


Hi

Has anyone come across excessive "transaction tables consistent reads - undo
records applied" in 11.2.0.3?

In our case it looks like this:

- select * from table, full scan

- table segment is about 400MB, so one would not expect more than a few GB
of physical IO for a full scan.

- physical IO reaches terabytes (of UNDO segments); "transaction tables
consistent reads - undo records applied" and "consistent gets" roughly
correspond to the IO (terabytes), while the other undo statistics (like
"data blocks consistent reads - undo records applied") have moderate values.
(A query to sample these counters per session is sketched after this list.)

- Developers confirmed the excessive reads happen when two batch jobs
modifying the same table cross each other. But - the funny point is that one
of the jobs finishes in tens of minutes. The other job is left compromised:
it crawls slowly for 10 or more hours just to full scan the 400MB table,
doing the excessive undo IO. Nothing else happens in between: just a bulk
fetch of 1000 records at a time into a PL/SQL (var)array.
As far as we saw, the process is not spinning: select * from table does
fetch records, but at an extremely slow pace.
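
For anyone who wants to watch the same counters, they can be sampled per
session from v$sesstat; a minimal query, assuming you know the SID of the
slow session:

  SELECT sn.name, st.value
    FROM v$sesstat st
    JOIN v$statname sn ON sn.statistic# = st.statistic#
   WHERE st.sid = :sid  -- SID of the session doing the slow full scan
     AND sn.name IN
         ('transaction tables consistent reads - undo records applied',
          'data blocks consistent reads - undo records applied',
          'consistent gets');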

Oracle support lists a few bugs as far as excessive "transaction tables
consistent reads - undo records applied" is concerned, but I have not found
one for 11.2.0.3.

On the other hand, I am wondering whether this is really a bug: can it be
that "transaction tables consistent reads - undo records applied" goes
through the roof if one session does a select while other sessions manage to
modify the same table segment and recycle the transaction tables in the undo
segments?
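
If that is the mechanism, I'd guess (only a guess on my part) that the undo
segments would show frequent wraps as the transaction table slots get
recycled; something like:

  SELECT usn, xacts, gets, waits, wraps, extends
    FROM v$rollstat
   ORDER BY wraps DESC;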

Brgds, Laimis N






--
//www.freelists.org/webpage/oracle-l