Re: Block dump - Uncommitted TXN - Help Urgent

  • From: BN <bnsarma@xxxxxxxxx>
  • To: mwf@xxxxxxxx
  • Date: Tue, 19 Sep 2006 12:06:30 -0400

On 9/19/06, Mark W. Farnham <mwf@xxxxxxxx> wrote:

Having many, many extents is rarely a problem, even with dictionary managed tablespaces (with the notable exceptions of online drop, and pathological conditions such as having a huge number of extents each smaller than the multiblock read size on tables scanned frequently enough to be a problem and infrequently enough to age out of cache).

Making extents frequently, on the other hand, could be your entire problem.

If you create a table with columns like dba_extents plus an as_of_date
column, and track your dictionary-managed objects as they change in size
(once or twice a day is probably enough), you will be able to
quickly identify objects that have an inappropriate next size. Now,
rebuilding those objects is usually a time-waster unless they exhibit empty
front (i.e., many empty blocks at the beginning of a table or cluster that is
significantly scanned before the block that contains the first row) or a
large honeycomb factor that is an artifact of non-recurring past behavior.
But changing the next size on the quickly extending objects you identify to
something that will cause an extent allocation about once a day (or less;
some folks shoot for once a quarter or less) should eliminate the
possibility that piling up on allocating extents is your problem.
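A minimal sketch of the tracking table described above; the table name and the reporting query are illustrative, not a product:

```sql
-- Snapshot table: dba_extents plus an as_of_date column.
-- WHERE 1 = 0 copies the structure without any rows.
CREATE TABLE extent_history AS
SELECT e.*, SYSDATE AS as_of_date
FROM dba_extents e
WHERE 1 = 0;

-- Run once or twice a day (e.g. from DBMS_JOB or cron):
INSERT INTO extent_history
SELECT e.*, SYSDATE
FROM dba_extents e;
COMMIT;

-- Objects gaining extents fastest across the snapshots taken so far:
SELECT owner, segment_name,
       MAX(cnt) - MIN(cnt) AS extents_added
FROM (SELECT owner, segment_name, as_of_date,
             COUNT(*) AS cnt
      FROM extent_history
      GROUP BY owner, segment_name, as_of_date)
GROUP BY owner, segment_name
ORDER BY extents_added DESC;
```

Objects near the top of that report are the candidates for a larger NEXT size.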

Now while I usually advocate finding the actual individual problem and
solving that (see Hotsos, Oak table, etc.), it sounds to me as if in this
case you have a storm of obfuscation that can be quickly and easily
eliminated. Then, if the systemic noise was not your entire problem, you
should return to resolving the actual individual problems by wait analysis.

Rightsizing, Inc. used to maintain and sell a product called "extmon"
designed to track object growth as an aid to capacity planning. Since it was
clear-text SQL, unauthorized free distribution soon made sales small enough
to discontinue the product (it was only $500). You can probably build it
yourself in a few hours, and make it fast in a bit longer. Or you can
probably find it, a derivative, or an independently created similar script
set lying around the net somewhere.

Good luck. Just please don't misconstrue this advice as suggesting you
should spend your life rebuilding objects.


*From:* oracle-l-bounce@xxxxxxxxxxxxx [mailto:
oracle-l-bounce@xxxxxxxxxxxxx] *On Behalf Of *BN
*Sent:* Tuesday, September 19, 2006 9:35 AM
*To:* David Sharples
*Cc:* Anjo Kolk; _oracle_L_list
*Subject:* Re: Block dump - Uncommitted TXN - Help Urgent

On 9/19/06, *David Sharples* <davidsharples@xxxxxxxxx> wrote:

why not just look at dba_blockers and dba_waiters - why are you doing
block dumps?

Find the blocking session and either kill it or call the user and ask them
to commit the changes
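Something along these lines (a sketch; the columns are from the DBA_WAITERS and DBA_BLOCKERS views, the session join is illustrative):

```sql
-- Each waiter paired with the session holding the lock it wants:
SELECT w.waiting_session, w.holding_session,
       w.lock_type, w.mode_held, w.mode_requested
FROM dba_waiters w;

-- Identify the blocking users, so you can call them:
SELECT s.sid, s.serial#, s.username, s.osuser, s.program
FROM v$session s
WHERE s.sid IN (SELECT holding_session FROM dba_blockers);
```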

On 18/09/06, *BN* <bnsarma@xxxxxxxxx> wrote:

On 9/18/06, *Anjo Kolk* <anjo.kolk@xxxxxxxxxxx> wrote:

I am sorry to be such a pain, but maybe explaining what the problem is and what you want to do will help me and others on this list.


On 9/18/06, *BN* <bnsarma@xxxxxxxxx> wrote:



I have taken a block dump. Where do I look for the missing commit?


Like I said, I am able to identify this info from v$lock (request, lmode,
block) and link it to v$session to get the object details.
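The v$lock linkage described above can be sketched like this (a common pattern, not the poster's exact query): a holder with BLOCK = 1 is blocking someone, and the waiter is the session requesting the same (TYPE, ID1, ID2) with REQUEST > 0.

```sql
-- Pair each blocking holder with its waiter(s):
SELECT h.sid  AS holder_sid,
       w.sid  AS waiter_sid,
       h.type, h.id1, h.id2,
       h.lmode,            -- mode the holder has
       w.request,          -- mode the waiter wants
       s.username, s.program
FROM v$lock h
JOIN v$lock w
  ON  h.type = w.type
  AND h.id1  = w.id1
  AND h.id2  = w.id2
JOIN v$session s
  ON  s.sid  = h.sid
WHERE h.block = 1
  AND w.request > 0;
```

The waiter's ROW_WAIT_OBJ# in v$session can then be joined to dba_objects for the object details.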

Initially I was thinking that it was a missing "COMMIT" somewhere in the
app; later I saw a pattern.

The blocker (doing a transaction (DML) spanning many tables) was holding the lock
a little longer, 10 seconds or more. On a very busy OLTP system, this was causing
issues for others.

I wasn't sure where the blocker was losing its time. Later in the evening, I
noticed that there were many ST enqueue locks (no, we are not using LMTs;
for all new tables I am pushing LMTs). I am assuming that this could be an
issue, so I bumped up the next extent size. The tables were large and the extents
were in the thousands.
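For reference, the kind of statements involved (the owner, table name, and NEXT size here are illustrative; the right NEXT depends on the table's daily growth):

```sql
-- Find dictionary-managed segments with extents in the thousands:
SELECT owner, segment_name, COUNT(*) AS extents
FROM dba_extents
GROUP BY owner, segment_name
HAVING COUNT(*) > 1000
ORDER BY extents DESC;

-- Bump NEXT so growth costs roughly one extent allocation per day
-- (PCTINCREASE 0 keeps subsequent extents that size):
ALTER TABLE app_owner.busy_table
  STORAGE (NEXT 512M PCTINCREASE 0);
```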

I am also monitoring v$session_wait. It's intermittent and fast. The users
complain that they are stuck, and if they retry they are back to normal.

I was sitting on this all day yesterday; I took some level 12 traces
and fixed some expensive queries, though I'm not sure if that helped. Those
queries are running faster now.

Again back to monitoring today ...

Appreciate your ideas/thoughts ....
Regards & Thanks


I was also thinking of INITRANS. Is there a way to track that from v$lock?
I remember Steve Adams once answering somebody by pointing at the REQUEST
column of v$lock, but I'm not sure.
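If the REQUEST-column hint is the one I recall, the usual check is this (a sketch; an ITL shortage from a too-low INITRANS typically surfaces as a TX enqueue requested in share mode, REQUEST = 4, though that mode can also indicate other waits such as unique-key contention):

```sql
-- Sessions waiting on a TX enqueue in share mode (possible ITL waits):
SELECT s.sid, s.username, l.type, l.request, l.id1, l.id2
FROM v$lock l
JOIN v$session s ON s.sid = l.sid
WHERE l.type = 'TX'
  AND l.request = 4;
```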

Regards & Thanks
