On Red Hat, if you run:

# grep Hugepagesize /proc/meminfo
Hugepagesize:     2048 kB

the value is reported in kB. But the formula uses only the number:

# grep Hugepagesize /proc/meminfo | awk '{print $2}'
2048

So huge = 2048, and the Red Hat calculation becomes 51539607552 / 2048 =
25165824 (shmall). That is the Red Hat idea. However, you should also look
at how low shmall is allowed to be: if you set it below the minimum, you
cannot start the database. Example: if you plan to use
sga_target=51539607552 with PAGE_SIZE=4096, then you need shmall >= 12582912.

I think you should look at what Oracle recommends:

MEM=$(free | grep Mem | awk '{print $2}')   # total memory, in kB
PAGE_SIZE=$(getconf PAGE_SIZE)              # in bytes
shmall=$((MEM / PAGE_SIZE))

Surachart Opun
http://surachartopun.com

On Thu, Jan 14, 2010 at 12:51 AM, Herring Dave - dherri <
Dave.Herring@xxxxxxxxxx> wrote:

> A little more info that I've uncovered. The value of 25165824 for shmall
> comes from a formula stating that shmall should be (shmmax / hugepagesize).
> shmmax is set to 51539607552 (so the largest shared memory segment can be
> 48 GB) and hugepagesize is 2048 KB, so the calculation others have done is
> 51539607552 / 2048 = 25165824.
>
> My problems with this are that shmmax is listed in bytes while hugepagesize
> is in KB, so if using that formula it should be 51539607552 / 2097152.
> Also, my understanding is that shmall should always be listed in terms of
> the default server page size. So if huge pages are used, it should match
> the total space of your huge pages but in terms of 4 KB pages, which means
> shmall should really be 12582912.
>
> Is any of what I listed in the 2nd paragraph correct? Sorry if it seems
> I'm over-analyzing this, but to request a particular value I feel I need to
> know exactly why and how I'm coming up with a value (and then fully
> document the method and reasons).
>
> Thx.
>
> Dave Herring | DBA, Acxiom Database Services
>
> 630-944-4762 office | 630-430-5988 cell | 630-944-4989 fax
> 1501 Opus Pl | Downers Grove, IL, 60515 | U.S.A.
| www.acxiom.com
> Service Desk: 888-243-4566
>
>
> -----Original Message-----
> From: Herring Dave - dherri
> Sent: Tuesday, January 12, 2010 9:54 AM
> To: Oracle L
> Subject: shmall on Linux recommendation
>
> Folks, I've inherited a 4-node RAC plus DG environment that has a rather
> curious shmall setting that I'd appreciate your thoughts on. I've done my
> best to understand its current setting, but those who made the decision are
> no longer with the company and/or client.
>
> The system is running 10.2.0.2 on RHEL 4. Like I said, it's a 4-node RAC,
> 64 GB RAM, with the same for a physical standby. Currently the SGA of each
> instance is 17 GB.
>
> The install guide gives a minimum value of 2097152 (8 GB) while the current
> value is 25165824 (96 GB). To me it seems just a wee bit aggressive to be
> setting the total shared memory to be RAM + swap! MOS doc 301830.1 gives
> the recommendation of the SUM of the SGAs on the server, so in this case
> 4456448 (17 GB).
>
> Sizing shmall to be the SUM of the SGAs on the server makes sense, but I'm
> curious if anyone has experience that suggests otherwise? Also, can anyone
> think of a good reason to set shmall to RAM + swap (other than "'cuz it's
> there")?
>
> Thx.
>
> Dave Herring | DBA, Acxiom Database Services
>
> --
> //www.freelists.org/webpage/oracle-l
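[Editor's note] The unit arithmetic debated in this thread can be checked directly. Below is a minimal shell sketch using only the figures quoted in the messages above (shmmax of 51539607552 bytes, 2048 kB huge pages, a 4096-byte default page, one 17 GB SGA); the values are hardcoded for illustration, not read from a live system:

```shell
#!/bin/sh
# Figures taken from the thread above (hardcoded for illustration; on a real
# host you would read them from /proc/meminfo, getconf PAGE_SIZE, free, etc.).
SHMMAX=51539607552        # bytes (48 GB)
HUGEPAGESIZE_KB=2048      # Hugepagesize from /proc/meminfo, in kB
PAGE_SIZE=4096            # default page size, in bytes
SGA_BYTES=$((17 * 1024 * 1024 * 1024))   # one 17 GB instance SGA

# Mixed-unit formula (bytes / kB) that produced the current setting:
echo "shmmax / hugepagesize(kB):  $((SHMMAX / HUGEPAGESIZE_KB))"           # 25165824

# The same formula with consistent units (bytes / bytes):
echo "shmmax / hugepagesize(B):   $((SHMMAX / (HUGEPAGESIZE_KB * 1024)))"  # 24576

# shmall expressed in default 4 kB pages, which is how the kernel reads it:
echo "shmmax in default pages:    $((SHMMAX / PAGE_SIZE))"                 # 12582912

# One 17 GB SGA in default pages (the MOS 301830.1-style sizing):
echo "17 GB SGA in pages:         $((SGA_BYTES / PAGE_SIZE))"              # 4456448

# What the current shmall of 25165824 pages corresponds to in GB:
echo "current shmall, in GB:      $((25165824 * PAGE_SIZE / 1024 / 1024 / 1024))"  # 96
```

This reproduces every number in the thread: the mixed-unit division yields the current 25165824, the consistent-unit division yields 24576, and expressing shmmax in 4 kB pages yields the 12582912 lower bound Dave derived.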