RE: linux memory: limiting filesystem caching

  • From: "Marquez, Chris" <cmarquez@xxxxxxxxxxxxxxxx>
  • To: <kutrovsky.oracle@xxxxxxxxx>, <mark.teehan@xxxxxxxx>, <zhuchao@xxxxxxxxx>
  • Date: Wed, 13 Jul 2005 13:35:03 -0400

Hmmm?

http://oss.oracle.com/projects/ocfs/dist/documentation/RHAS_best_practices.html
...
2. Configuring Linux For OCFS
...

Due to improvements in Virtual Memory page cache performance, a minimum
kernel errata of e.24 or higher is strongly recommended 
...
If using kernel errata e.12 or higher, the default kernel page cache
settings should be used. 
...
Only if using a kernel errata lower than e.12 is manual configuration of
the Virtual Memory page cache likely to be necessary to avoid excessive
page cache retention. In that case, add the following parameter to the
/etc/sysctl.conf file, then reboot for the change to take effect: 
[/etc/sysctl.conf]
vm.pagecache = 10 20 30
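
(For what it's worth, a quick way to check where a box stands -- just a
sketch, assuming an errata kernel new enough to expose
/proc/sys/vm/pagecache:)

# show the running kernel errata level
uname -r
# show the current page cache settings (min / borrow / max percent of RAM)
cat /proc/sys/vm/pagecache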


Comments?

PS: I have Oracle 9.2.0.5 RAC, RHEL 3 ES, OCFS, kernel 2.4.21-4, 8 GB RAM, and
use RMAN for a 100 GB DB backup (to an ext3 disk).
I am not aware of whether we "swap" during backups...we have no known issues.

Chris Marquez
Oracle DBA
 

-----Original Message-----
From: oracle-l-bounce@xxxxxxxxxxxxx
[mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Christo Kutrovsky
Sent: Wednesday, July 13, 2005 9:30 AM
To: mark.teehan@xxxxxxxx
Cc: oracle-l@xxxxxxxxxxxxx
Subject: Re: linux memory: limiting filesystem caching

Hello Teehan,

You don't mention which RH version you have, but I'll assume 3.0 Advanced
Server.

As Zhu Chao mentioned, /proc/sys/vm/pagecache is the parameter you need.

My recommendation is to give at most 50% to file caching.

vi /etc/sysctl.conf 

and add:
vm.pagecache=10 50 50

then run "sysctl -p" to apply the change immediately; because it is in
/etc/sysctl.conf it will also be in effect after the next boot.

You can monitor with "vmstat 2" in another session to see whether the memory
reported under "cache" drops and "free" goes up.

In addition, on Linux you should always use hugepages (hugetlbpool) for
Oracle. That way Oracle's memory is locked in physical RAM and is not
managed by (is almost invisible to) the Linux memory manager, thus reducing
memory management cost.

Also, the benefit of the big pages (2 MB chunks) is a reduction in your
overall memory usage, especially if you have a lot of sessions. Some simple
math:

SGA of 2 GB in 4 KB pages = 524,288 pages * 8 bytes per pointer (I
think) = 4 MB per process for the page table entries. If you have 500
sessions, that's 500 * 4 MB = 2 GB of memory to manage 2 GB of
memory.

Compared to an SGA of 2 GB in 2 MB pages = 1,024 pages * 8 bytes = 8 KB per
process. For the same 500 sessions you will use 4 MB of memory to manage
2 GB of memory. A significant improvement.
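
(If you want to redo the arithmetic, a quick shell check -- purely
illustrative:)

# 2 GB SGA in 4 KB pages: page count, then page-table bytes per process
echo $(( 2 * 1024 * 1024 / 4 ))        # 524288 pages
echo $(( 524288 * 8 ))                 # 4194304 bytes = 4 MB per process
echo $(( 500 * 4 ))                    # 2000 MB, ~2 GB for 500 sessions
# same SGA in 2 MB hugepages
echo $(( 2 * 1024 / 2 ))               # 1024 pages
echo $(( 1024 * 8 ))                   # 8192 bytes = 8 KB per process
echo $(( 500 * 8 ))                    # 4000 KB, ~4 MB for 500 sessions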

Also, the CPU has only so many entries in its virtual-to-physical memory
map cache (the TLB), so having that many fewer pages will significantly
improve the hit ratio of your virtual-to-physical mappings.

So simply by using hugepages you:
- reduce memory for PTE pointers by a factor of 512 (2 GB for 500
sessions)
- lock Oracle's SGA in physical memory
- reduce the memory management costs for the Linux kernel
- improve the CPU cache hit ratio for virtual-to-physical mappings
- reduce the amount of memory that you have to touch overall
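
As a rough sketch of the setup on a 2.4-based RHEL 3 box (the pool size
here is illustrative; on these kernels the parameter is vm.hugetlb_pool in
MB, whereas 2.6 kernels use vm.nr_hugepages as a page count):

# check hugepage size and current pool (HugePages_* / Hugepagesize lines)
grep -i huge /proc/meminfo
# reserve roughly 2.5 GB of 2 MB hugepages for the SGA (value is in MB)
echo "vm.hugetlb_pool=2560" >> /etc/sysctl.conf
sysctl -p
# verify the pool was actually allocated before starting the instance
grep -i huge /proc/meminfo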

I hope this helps.


--
Christo Kutrovsky
Database/System Administrator
The Pythian Group

On 7/13/05, Teehan, Mark <mark.teehan@xxxxxxxx> wrote:
> Hi all
> I have several redhat blade clusters running 10.1.0.4 RAC on
> 2.4.9-e.43enterprise. All database storage is OCFS, with ext3 for
> backups, home dirs, etc. The servers have 12GB of RAM, of which about 2GB
> is allocated to the database, which is fine. Linux, in its wisdom, uses
> all free memory (10GB in this case) for filesystem caching for the
> non-OCFS filesystems (since OCFS uses direct I/O); so every night when I do
> a backup it swallows up all available memory and foolishly sends itself
> into a swapping frenzy, and afterwards it sometimes cannot allocate
> enough free memory for background processes. This seems to be worse on
> e.43; I was on e.27 until recently. Does anyone know how to control
> filesystem block caching? Or how to get it to de-cache some? For instance,
> I have noticed that gzipping a file, then Ctrl-C'ing it, can free up a
> chunk of RAM; I assume it de-caches the original uncompressed file. But
> it's not enough!
> 
> Rgds
> Mark
> 



--
//www.freelists.org/webpage/oracle-l
