It may be that you are running out of memory map segments. Check the
vm.max_map_count kernel parameter and make sure it's 131072 or larger.
The default (65530 on most kernels) can be too low for a busy database
with many users. Even 262144 is not outlandish.
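If that theory fits, checking and raising the limit is quick. A minimal sketch (the 262144 value is just the suggestion above; adjust as needed, and note these commands need root):

```shell
# Show the current per-process memory-map limit (Linux kernel parameter).
sysctl vm.max_map_count

# Raise it for the running kernel; takes effect immediately, lost on reboot.
sysctl -w vm.max_map_count=262144

# Persist the change across reboots.
echo "vm.max_map_count = 262144" >> /etc/sysctl.conf
sysctl -p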
Regards
On 05/18/2018 12:54 PM, Chris Taylor wrote:
Also (if it matters) this is a JDBC connected program from ODI.
Chris
On Fri, May 18, 2018 at 11:53 AM, Chris Taylor <christopherdtaylor1994@xxxxxxxxx> wrote:
Env: 12.1.0.2 on Linux 7.4 64-bit
So, I've been all over Oracle support and Google this morning
trying to determine a root cause for this.
Here's the error, and it's always 32GB (ulimit settings to follow).
Oracle doesn't have any support docs for KOLLRSZ.
The reason I'm wondering about ulimits kicking in is because it's
always at 32GB. I'm betting it's a memory leak, but I'd like to
overcome this seeming 32GB barrier.
[TOC00001]
ORA-04030: out of process memory when trying to allocate 16328
bytes (koh-kghu sessi,kollrsz)
[TOC00001-END]
[TOC00002]
========= Dump for incident 931836 (ORA 4030) ========
[TOC00003]
----- Beginning of Customized Incident Dump(s) -----
=======================================
TOP 10 MEMORY USES FOR THIS PROCESS
---------------------------------------
*** 2018-05-18 12:33:30.917
100% 32 GB, 2087947 chunks: "kollrsz "
koh-kghu sessi ds=0x7f8047e4f178 dsprt=0x7f804ae4cf60
0% 9688 KB, 77 chunks: "kllcqas:kllsltba " SQL
QERHJ hash-joi ds=0x7f8048f2dc40 dsprt=0x7f804a5ab710
0% 4333 KB, 50 chunks: "permanent memory " SQL
kxs-heap-w ds=0x7f804a5ab710 dsprt=0x7f804ae4cf60
0% 2503 KB, 18 chunks: "QERHJ list array " SQL
QERHJ hash-joi ds=0x7f804a61dc30 dsprt=0x7f804a5ab710
0% 2432 KB, 7 chunks: "HT buckets " SQL
QERHJ hash-joi ds=0x7f804a61dc30 dsprt=0x7f804a5ab710
0% 2056 KB, 2 chunks: "kllcqgf:kllsltba "
klcliti:kghds ds=0x7f80491e9e10 dsprt=0x7f804f9a09e0
0% 1537 KB, 56 chunks: "QERHJ Bit vector " SQL
QERHJ hash-joi ds=0x7f8048f2dc40 dsprt=0x7f804a5ab710
0% 1381 KB, 66 chunks: "free memory " SQL
QERHJ hash-joi ds=0x7f804a61dc30 dsprt=0x7f804a5ab710
0% 1213 KB, 27 chunks: "free memory "
top call heap ds=0x7f804f9a1be0 dsprt=(nil)
0% 1046 KB, 73 chunks: "free memory "
top uga heap ds=0x7f804f9a1e00 dsprt=(nil)
I'm wondering if there's a parameter at the OS layer limiting the
session to 32GB.
ulimit -a
-----------------------------------------
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 16506448
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 131072
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 32768
cpu time (seconds, -t) unlimited
max user processes (-u) 131072
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
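One way to test the map-segment theory (as opposed to a ulimit) would be to compare the failing process's mapping count against the kernel ceiling while it grows. A hypothetical sketch; SPID would be the OS process id from the ORA-04030 incident trace (/proc/self is used here only as a stand-in):

```shell
# SPID is assumed to be the Oracle shadow process's OS pid from the trace;
# default to our own process so the commands run as written.
SPID=${SPID:-self}

# Number of memory mappings the process currently holds...
wc -l < /proc/$SPID/maps

# ...versus the kernel's per-process ceiling (vm.max_map_count).
cat /proc/sys/vm/max_map_count
```

If the mapping count climbs toward the ceiling as the job runs, the process is dying on map exhaustion rather than total memory.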
PGA Settings are as follows on this Data Warehouse:
NAME TYPE VALUE
------------------------------------ ----------- ------------
pga_aggregate_limit big integer 128G
pga_aggregate_target big integer 8G
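For watching the leak relative to those settings, the per-process PGA columns in v$process show how big the session has grown. A sketch, assuming sqlplus access on the host (pga_used_mem, pga_alloc_mem, and pga_max_mem are standard v$process columns; the 12c FETCH FIRST syntax limits the output):

```shell
sqlplus -s / as sysdba <<'EOF'
-- Top 10 PGA consumers; pga_max_mem is the per-process high-water mark.
select spid,
       round(pga_used_mem /1024/1024) used_mb,
       round(pga_alloc_mem/1024/1024) alloc_mb,
       round(pga_max_mem  /1024/1024) max_mb
from   v$process
order  by pga_alloc_mem desc
fetch first 10 rows only;
EOF
```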