Memory management in Oracle 11g and limit of open file descriptors

  • From: Michael Elkin <melkin4u@xxxxxxxxx>
  • To: oracle-l <oracle-l@xxxxxxxxxxxxx>
  • Date: Thu, 12 Nov 2009 22:21:10 +0200

Dear list members,

Recently I encountered a problem with system resources on my 64-bit
Linux machine running Oracle 11g. I started 2 instances on the same
server, each with a MEMORY_TARGET of 1GB.
We ran a stress test that opened 200-300 application connections to the
database and suddenly got many "too many open files" errors.

As you know, Oracle 11g uses a different mechanism for memory management
(Automatic Memory Management), implementing shared memory on the tmpfs
file system mounted at /dev/shm.
In previous versions the shared segment could easily be observed with
the ipcs -m command, but in Oracle 11g the situation is different: many
small files of 4MB each (for MEMORY_TARGET up to 1GB; 16MB when
MEMORY_TARGET is above 1GB) are created under /dev/shm.
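For anyone who wants to look at this on their own box, the difference is
easy to see from the shell (generic Linux commands, nothing
Oracle-specific):

    # Pre-11g style: System V shared memory segments
    ipcs -m

    # 11g with MEMORY_TARGET set: granule files on tmpfs
    ls -l /dev/shm             # one file per memory granule
    ls /dev/shm | wc -l        # number of granule files
    df -h /dev/shm             # size and usage of the tmpfs mount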
So I tried to understand how all this relates to the "too many open
files" error.
I examined the lsof output and found that ~55000 files were open.
After that I did a small experiment:
1. Shut down the database; /dev/shm is empty.
2. Start up the database:
    /dev/shm now holds 250 files (which roughly makes sense: 1GB of
MEMORY_TARGET / 4MB ≈ 250 files).
    lsof reports 5000 open files without a single external session open,
which means that all of these files were opened by Oracle background
processes such as pmon, dbwr, lgwr, etc. I checked this once more and
indeed found that by default my instance starts with 20 background
processes.

The most interesting point is that each Oracle process opens every file
Oracle has created under /dev/shm: 20 processes * 250 files = 5000. That
is how I got 5000 open files without even opening a single connection to
Oracle (a quick way to verify this is sketched after step 3 below).
3. After that I ran a simple shell script that opens 200 sessions
without closing them. The number of open files immediately jumped to
~55000.
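lsof makes it easy to see where the descriptors come from. A minimal
sketch, assuming the instance runs under the "oracle" OS user (adjust
the user and the ora_ process-name pattern for your environment):

    # Open /dev/shm files per process: each Oracle process should
    # show roughly one descriptor per granule file.
    lsof -u oracle 2>/dev/null | awk '$NF ~ /^\/dev\/shm\//' \
        | awk '{print $1, $2}' | sort | uniq -c

    # Grand total of /dev/shm descriptors across all processes
    lsof -u oracle 2>/dev/null | awk '$NF ~ /^\/dev\/shm\//' | wc -l

    # Number of background processes for the instance
    ps -ef | grep '[o]ra_' | wc -l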

From my first tests I can see that the number of files opened by Oracle
is directly related to (MEMORY_TARGET / granule size) * number of Oracle
processes, per instance. Taking into account that every OS has its own
limit on the maximum number of open files, the new memory management
approach looks a little problematic to me.
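A back-of-envelope check with the numbers from my test reproduces what
lsof reported (a sketch only; the 250 granule files are the count
observed in step 2 above):

    FILES=250       # granule files seen in /dev/shm after startup
    PROCS=220       # 20 background + 200 dedicated server processes
    echo $(( FILES * PROCS ))      # 55000, matching the lsof count

    # Limits to compare against:
    ulimit -n                      # per-process descriptor limit
    cat /proc/sys/fs/file-max      # system-wide descriptor ceiling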

I would like to know if someone has experienced the same thing and how
you tried to solve this problem.
Is there any option to increase the basic file (granule) size under
/dev/shm, for example?
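In the meantime the only obvious stopgap I see is raising the descriptor
limits themselves; a sketch of what I mean, using the standard Linux
knobs, with example values only:

    # /etc/security/limits.conf: per-process limit for the oracle user
    oracle  soft  nofile  65536
    oracle  hard  nofile  65536

    # System-wide ceiling (put fs.file-max in /etc/sysctl.conf to persist)
    sysctl -w fs.file-max=6815744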

Thank you.
-- 
Best Regards
Michael Elkin
