yep. fix that before looking at the other things if the problem persists.
-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Stefan Koehler
Sent: Monday, October 24, 2022 6:22 AM
To: Pap; Oracle L
Subject: Re: Question on concurrency wait time
Hello Pap,
it is not about the BCT file size or the DB buffer cache, it is about the BCT
buffer in the SGA.
How can this all be related? Let's take line 2 of your DASH wait chain script
output as an example.
The session (USER1:(PRG2) block change tracking buffer space) is waiting on a
free BCT buffer and, while waiting, holds a block (in the buffer cache) in an
incompatible mode. The other ~5.4 sessions on average (USER1:(PRG2) buffer
busy waits [data block]) also try to modify this block (which can be an index
block) but have to wait until the blocking session (the one waiting on a free
BCT buffer) can proceed and change the block mode to a compatible one.
So overall the root cause of the issue is "block change tracking buffer space"
and "buffer busy waits" is just a consequence of it.
Now you need to figure out:
1) Do you have an I/O latency issue with the BCT file?
2) Is your BCT buffer big enough to handle the database load (also in regard
to the I/O latency of the BCT file)?
... and then fix the I/O latency problem of the BCT file and/or increase the
BCT buffer.
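As a starting point for both questions, the current CTWR (change tracking writer) buffer allocation in the SGA and the cumulative time spent on BCT-related waits can be checked with something like the following sketch (pool/name strings may vary slightly by Oracle version):

```sql
-- Size of the CTWR dba buffer (the BCT buffer) currently allocated in the SGA
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  name LIKE '%CTWR%';

-- Waits related to block change tracking since instance start
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event LIKE '%change tracking%';
```

If "block change tracking buffer space" dominates the wait time here, that supports the wait-chain reading above.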
Best Regards
Stefan Koehler
Independent Oracle performance consultant and researcher
Website: http://www.soocs.de
Twitter: @OracleSK
Pap <oracle.developer35@xxxxxxxxx> wrote on 24.10.2022 11:59 CEST:
Thank you so much, Stefan.
I am not sure why all the responses I sent earlier bounced back from the
list. I had replied earlier suspecting that the smaller size of the buffer
cache is causing this, because the rate of inserts is almost the same
throughout the day, yet we see the concurrency waits during the times when
heavy activity from another application hits the database. But as you rightly
pointed out, and as the ASH stats also show, the BCT buffer space wait seems
to be the starting point, and we may have to increase the size of the BCT
buffer. Currently the database size is ~27TB and the BCT file size in
v$block_change_tracking shows as ~2GB.
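For reference, this is roughly how we read the BCT file status and size (a minimal sketch; the filename is of course environment-specific):

```sql
-- Status, location and size of the block change tracking file
SELECT status, filename, bytes
FROM   v$block_change_tracking;
```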
But again, how can this be related? I understand that during periods of
higher activity (say heavy DML) from another application, which may not even
touch this object/index, the response of this INSERT query (along with other
DML) can slow down, but in that case the top wait event logged against the
insert should be "block change tracking buffer space". However, we are seeing
the top wait event against this INSERT query as concurrency/buffer busy
waits, with current_obj# pointing to the index partition. My understanding
was that a concurrency/buffer busy wait on an index block can only happen in
a race scenario, i.e. when the inserts are too heavy/concurrent and are all
going to change one specific block. Please correct me if my understanding is
wrong.
And yes, we initially suspected that it is the size of the buffer cache that
matters here and that it has to be bigger to cater for the load. We then
checked the SGA advisory to see whether it points to the same conclusion.
This database has a db_cache_size of 6GB with sga_max_size ~35GB and
sga_target set to 0. Not sure why, but I don't see any records populated in
dba_hist_sga_target_advice. The rows in dba_hist_db_cache_advice show that
doubling the db cache, i.e. increasing it by a size factor of ~2, would only
reduce physical reads by ~15 to ~17%. Not sure if these figures are
reliable.
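The same estimate can also be read live from v$db_cache_advice without going through AWR; a sketch along these lines (DEFAULT pool, default block size assumed):

```sql
-- Estimated physical reads at each candidate cache size for the DEFAULT pool
SELECT size_for_estimate, size_factor, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_factor;
```

The row with size_factor near 2 corresponds to the "doubling the cache" scenario mentioned above.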