This is just an UPDATE, with no LOBs involved at all.
The bug you mentioned, 30098251, is fixed in this 19.6 version.
From: Noveljic Nenad<mailto:nenad.noveljic@xxxxxxxxxxxx>
Sent: 15 July 2021 12:16
Subject: RE: KTSJ / Wnnn
Space bg slaves reclaim the space freed up by deleted LOBs. You could check if
your code massively deletes LOBs.
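One way to check, assuming 12c or later (where ASH records the SQL opcode in SQL_OPNAME) and Diagnostics Pack licensing, is to look for recent DELETE activity against tables that own LOB columns — a sketch, not a definitive diagnostic:

```sql
-- Sketch: recent DELETE activity against tables that own LOB columns.
-- Assumes Diagnostics Pack licensing and 12c+ (SQL_OPNAME in ASH).
select o.owner, o.object_name, count(*) ash_samples
from   gv$active_session_history h
join   dba_objects o on o.object_id = h.current_obj#
where  h.sql_opname = 'DELETE'
and    exists (select 1 from dba_lobs l
               where l.owner = o.owner
               and   l.table_name = o.object_name)
group  by o.owner, o.object_name
order  by ash_samples desc;
```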
The _max_spacebg_slaves default of 1000 is excessive and could exhaust the I/O capacity.
We set this parameter to 10 on all our databases a while ago.
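For reference, the change would look something like the following. Underscore parameters should only be altered with guidance from Oracle Support; this sketch assumes an spfile change followed by a restart:

```sql
-- Cap the number of space management slaves; takes effect after restart.
alter system set "_max_spacebg_slaves"=10 scope=spfile;
```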
Another reason to set the limit: I’ve noticed many large PGA allocations from
space bg processes, which don’t go away until you kill the processes.
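A sketch of how to spot those allocations, joining the process and session views and filtering on the Wnnn program string (the LIKE pattern is an assumption about how the slaves are named on your platform):

```sql
-- Largest PGA allocations held by Wnnn space management slaves.
select s.inst_id, s.sid, s.program,
       round(p.pga_alloc_mem/1024/1024) pga_alloc_mb
from   gv$process p
join   gv$session s on s.paddr   = p.addr
                   and s.inst_id = p.inst_id
where  s.program like '%(W0%'
order  by p.pga_alloc_mem desc;
```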
Might be due to the following bug:
Bug 30098251 : WNNN PROCESSES CREATE AN EXCESSIVE NUMBER OF OPEN CURSORS
From: oracle-l-bounce@xxxxxxxxxxxxx <oracle-l-bounce@xxxxxxxxxxxxx> On Behalf
Of Dominic Brooks
Sent: Thursday, 15 July 2021 13:02
To: ORACLE-L <oracle-l@xxxxxxxxxxxxx>
Subject: KTSJ / Wnnn
I was observing a foreground process yesterday which was running a series of
batched updates from Java in a single thread and was running very slowly.
Each element in the batch was updating a single row via a unique scan.
The execution time of this feed was reported as having tripled since moving to
Performance was atrocious. For example, from AWR over a period of 15-16 hours,
an average-size batch of a couple of hundred elements was averaging anywhere
between 8 and over 100 seconds per execution per hour, with the vast majority of
the time in cluster-related waits. Averages hide a whole bunch of detail, of
course, but they are a useful indicator.
I was observing from GV$SESSION and GV$ASH, and the source of the cluster waits
seems to be related to KTSJ slave activity. There was a strong correlation of
the “two” (the Java update plus multiple active KTSJ slaves) working on the same
datafile/blocks: a series of the two doing gc buffer busy release, gc buffer
busy acquire, and gc current block busy, with the occasional cell single block
physical read. Blocking session information on some of the gc waits
occasionally pointed at the other (the update blocked by KTSJ, or vice versa).
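For anyone wanting to look at the same correlation, a sketch of the kind of ASH query I mean (:fg_sid is a placeholder for the foreground session's SID, and the Wnnn program pattern is an assumption):

```sql
-- gc waits where the foreground and Wnnn slaves touch the same file/block.
select h.inst_id, h.session_id, h.program, h.event,
       h.current_file#, h.current_block#, h.blocking_session
from   gv$active_session_history h
where  h.event like 'gc%'
and    (h.program like '%(W0%' or h.session_id = :fg_sid)
order  by h.sample_time;
```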
Reading all the responses, I found the oracle-l thread from May 2020 on KTSJ to
be the best source of information:
And a couple of bug references leading on from there, not all of which are
relevant to my version (19.6), but which give indications of what might be going on:
* blocks are not marked as free in assm after delete - 12.2 and later (Doc
* performance degradation by W00 processes after applying July 2020 DBRU
(Doc ID 32075777.8), superseded by
* force full repair enabled by fix control and populate repair list even if
_assm_segment_repair_bg=false (Doc ID 32234161.8)
With mention of the parameter _assm_segment_repair_bg.
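To see how those parameters are currently set without bouncing anything, the usual hidden-parameter lookup works (run as SYS; a sketch):

```sql
-- Current values of the hidden parameters mentioned above (run as SYS).
select i.ksppinm parameter, v.ksppstvl value
from   x$ksppi  i
join   x$ksppcv v on v.indx = i.indx
where  i.ksppinm in ('_assm_segment_repair_bg', '_max_spacebg_slaves');
```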
Per the explanations in the oracle-l thread, it seems the foreground session
does something which then prompts a background session to check/fix the ASSM
information. But in my case, this fixing is causing significant contention back
to the foreground session.
I ran snapper on some of the KTSJ slaves and, of the ASSM fix-related stats,
ASSM bg: slave fix state was consistently around 5000 in a 5-second period.
That is not a statistic I have any context for judging the value of.
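For context, the same statistics are also visible instance-wide; since they are cumulative, the deltas between two samples matter more than the absolute values (a sketch):

```sql
-- Cumulative ASSM background fix statistics; sample twice and diff.
select inst_id, name, value
from   gv$sysstat
where  name like 'ASSM bg%'
order  by value desc;
```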
This is a monthly feed, so it doesn't happen every day, but when it does it sits
in a critical path. It's finished now, so there's not a lot I can look at if
it's not in ASH. Obviously a next step is to try to reproduce this in a test
environment.
I just wondered whether anyone had done any further investigation into this.