Re: linux memory: limiting filesystem caching

  • From: Christo Kutrovsky <kutrovsky.oracle@xxxxxxxxx>
  • To: mark.teehan@xxxxxxxx
  • Date: Wed, 13 Jul 2005 16:29:42 +0300

Hello Teehan,

You don't mention which RH version you have, so I'll assume 3.0 Advanced Server.

As Zhu Chao mentioned, /proc/sys/vm/pagecache is the parameter you need.

My recommendation is to give at most 50% to file caching.

vi /etc/sysctl.conf 

and add:
vm.pagecache=10 50 50

then run "sysctl -p" to apply the change immediately; the settings will also be in effect on every subsequent boot.

You can monitor with "vmstat 2" in another session to see whether the
memory reported under "cache" drops and "free" rises.

In addition, on Linux you should always use huge pages (hugetlbpool)
for Oracle. That way Oracle's memory is locked in physical RAM and is
almost invisible to the Linux memory manager, which reduces memory
management overhead.
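A minimal sketch of the setup, assuming RHEL 3's 2.4-kernel interface, where vm.hugetlb_pool takes a size in MB (on 2.6 kernels the parameter is vm.nr_hugepages, a page count). The 2560 MB figure is an example sized for a ~2 GB SGA, not a recommendation:

```shell
# Reserve a huge page pool (example size: 2560 MB for a ~2 GB SGA).
# vm.hugetlb_pool is the RHEL 3 / 2.4-kernel name for this tunable.
cat >> /etc/sysctl.conf <<'EOF'
vm.hugetlb_pool = 2560
EOF
sysctl -p

# Verify the pool was actually allocated; allocation can fail if
# physical memory is already fragmented, so setting it persistently
# and rebooting is the reliable path.
grep -i huge /proc/meminfo
```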

The big pages (2 MB chunks) will also reduce your overall memory
usage, especially if you have a lot of sessions. Some simple math:

An SGA of 2 GB in 4 KB pages = 524288 pages * 8 bytes per pointer (I
think) = 4 MB per process for the page pointers. If you have 500
sessions, that's 500 * 4 MB = 2 GB of memory to manage 2 GB of
memory.

Compare that with an SGA of 2 GB in 2 MB pages = 1024 pages * 8 bytes
= 8 KB per process. For the same 500 sessions you will use only 4 MB
of memory to manage 2 GB of memory. A significant improvement.

Also, the CPU has only so many entries in its virtual-to-physical
memory map cache (the TLB), so having that many fewer pages will
significantly improve the hit ratio of your virtual-to-physical
mappings.
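As a rough illustration of the TLB effect (the 64-entry TLB size here is a hypothetical round number, not a measured figure for any specific CPU):

```python
# TLB "reach": how much memory the TLB can map without a miss.
# 64 entries is an assumed, illustrative TLB size.
TLB_ENTRIES = 64
reach_4k = TLB_ENTRIES * 4 * 1024      # coverage with 4 KB pages
reach_2m = TLB_ENTRIES * 2 * 1024**2   # coverage with 2 MB pages

print(reach_4k // 1024, "KiB vs", reach_2m // 1024**2, "MiB")
# → 256 KiB vs 128 MiB: the same TLB covers 512x more of the SGA.
```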

So simply by using huge pages you:
- reduce memory for PTE pointers by a factor of 512 (2 GB saved for 500 sessions)
- lock Oracle's SGA in physical memory
- reduce memory management costs for the Linux kernel
- improve the CPU's cache of virtual-to-physical mappings
- reduce the overall amount of memory you have to touch

I hope this helps.


-- 
Christo Kutrovsky
Database/System Administrator
The Pythian Group

On 7/13/05, Teehan, Mark <mark.teehan@xxxxxxxx> wrote:
> Hi all
> I have several redhat blade clusters running 10.1.0.4 RAC on 
> 2.4.9-e.43enterprise. All database storage is OCFS, with ext3 for backups, 
> home dirs etc. The servers have 12GB of RAM, of which about 2GB is allocated 
> to the database, which is fine. Linux, in its wisdom, uses all free memory 
> (10GB in this case) for filesystem caching for the non OCFS filesystems 
> (since OCFS uses directIO); so every night when I do a backup it swallows up 
> all available memory and foolishly sends itself into a swapping frenzy; and 
> afterwards it sometimes cannot allocate enough free mem for background 
> processes. This seems to be worse on e43; I was on e27 until recently. Does 
> anyone know how to control filesystem block caching? Or how to get it to 
> de-cache some? For instance, I have noticed that gziping a file, then 
> ctrl-C'ing it can free up  a chunk of RAM, I assume it de-caches the original 
> uncompressed file. But its not enough!
> 
> Rgds
> Mark
> 
> --
> //www.freelists.org/webpage/oracle-l
> 

