Hi Nenad,
Yep, interesting. As the failed allocation size was over 50 kB, these
allocations can (also) use the shared pool reserved area. The heapdump
analysis shows that there are plenty of large *R-free* chunks in sub-heap 1,
sub-sub-heaps 0 and 3 (12.2 uses only 2 sub-sub-heaps/durations instead of
4).
It looks like the KTSL subheap allocations below are done from duration 0.
  Total_size #Chunks  Chunk_size,        From_heap,       Chunk_type,  Alloc_reason
  ---------- ------- ------------ ----------------- ----------------- -----------------
  ...
    73586688    1392       52864 ,     sga heap(1,0),       freeable,  KTSL subheap
    20987008     397       52864 ,     sga heap(1,0),     R-freeable,  KTSL subheap
    19408560   11390        1704 ,     sga heap(1,0),       freeable,  kcbi io desc sl
    16789760      20      839488 ,     sga heap(1,3),         R-free,
    12581760       6     2096960 ,     sga heap(1,0),         R-free,
    11036608      11     1003328 ,     sga heap(1,0),         R-free,
How many shared pool sub-pools do you have? The heapdump analyzer output
indicates just one, but I'd like to confirm to be sure.
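One way to double-check the sub-pool count on your side is to read the hidden _kghdsidx_count parameter; a sketch (run as SYS, since it joins the x$ksppi/x$ksppcv fixed tables directly):

```sql
-- Sub-pool count as configured on the instance (run as SYS).
-- _kghdsidx_count is a hidden parameter, hence the x$ join
-- instead of v$parameter.
SELECT n.ksppinm  AS parameter
     , v.ksppstvl AS value
FROM   x$ksppi  n
     , x$ksppcv v
WHERE  n.indx = v.indx
AND    n.ksppinm = '_kghdsidx_count';
```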
Also, can you post the output of:
1) SELECT * FROM v$shared_pool_reserved
2) @kghlu.sql
<https://github.com/tanelpoder/tpt-oracle/blob/master/kghlu.sql>
3) @ksmlru.sql
<https://github.com/tanelpoder/tpt-oracle/blob/master/ksmlru.sql>
The above commands/scripts are safe to run (unlike x$ksmsp queries) as they
don't hold shared pool latches for a long time. As long as you haven't
restarted the instance since the problem happened, "current" output should
be fine.
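For the v$shared_pool_reserved output, the columns I'll mainly be looking at are the failure/miss counters; if you prefer a narrower query than SELECT *, something like this shows the same information:

```sql
-- Reserved-area pressure indicators: how much contiguous free space
-- is left, and whether/when reserved-area requests have failed.
SELECT free_space
     , max_free_size
     , requests
     , request_misses
     , request_failures
     , last_failure_size
FROM   v$shared_pool_reserved;
```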
In particular, I'm interested in what might have triggered the resizing
decision when there was enough contiguous freeable memory to fulfil the
request. Also, is there a way to trace the resizing decisions?