Re: 11g RedHat 5 and Hugepages? anybody successful ?

  • From: Andreas Piesk <a.piesk@xxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Tue, 12 Oct 2010 00:24:43 +0200

Crisler, Jon wrote:
> Don, I am way beyond that point. If you set memory_max_target = 0 and then
> start the db, ORA-00849 is thrown.
> 
> ORA-00849  SGA_TARGET value was more than MEMORY_MAX_TARGET value. 
> 
> If SGA_TARGET is unset, then I get ORA-27102: out of memory
>
> Linux-x86_64 Error: 28: No space left on device
> 

I've had no problems with 11g RAC on RHEL5 with hugepages.

Just to be sure: you start the instance with memory_target unset,
memory_max_target unset and sga_target=x, right?
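
For comparison, the memory-related part of a test pfile that plays nicely
with hugepages might look like this (the sizes are placeholders only, adjust
to your box):

  sga_max_size=2G
  sga_target=2G
  pga_aggregate_target=1G
  # memory_target / memory_max_target deliberately left unset:
  # AMM allocates the SGA via /dev/shm and cannot use hugepages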

I think the instance is not using hugepages at all and tries to allocate the
memory from the default 4k pool, which fails because of the memory already
reserved for the hugepage pool.
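
A rough sizing sanity check, with illustrative numbers only (assuming the
default 2 MB hugepage size on x86_64 and a 4 GB SGA):

  SGA size        : 4096 MB
  vm.nr_hugepages : >= 4096 / 2 = 2048
  memlock ulimit  : >= 4194304 KB (limits.conf values are in KB), e.g.
                    oracle soft memlock 4194304
                    oracle hard memlock 4194304

Whatever the hugepage pool reserves is no longer available as normal 4k
pages, so the PGA, the server processes and the OS have to fit into what is
left.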

Have you tried smaller values like 2G for sga_target (assuming you have some
gigs left after allocating the huge pages)? As others said, creating a pfile
for testing saves some time. Please check the memlock ulimit in your shell
session before starting the instance. If the instance starts with sqlplus,
check from which pool the memory has been allocated (/proc/meminfo or pmap).
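
The checks I would run look roughly like this:

  ulimit -l                  # memlock limit in KB, must cover the whole SGA
  grep Huge /proc/meminfo    # compare HugePages_Free / HugePages_Rsvd
                             # before and after startup
  ipcs -m                    # the SGA appears as SysV shared memory segments

If neither HugePages_Free nor HugePages_Rsvd changes once the instance is up,
the SGA did not come from the hugepage pool.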

Also, there's a bug in CRS: if you start the instance with srvctl, your
memlock limit will not be used. The latest PSU was supposed to fix that, but
the PSU, at least the CRS part, is broken and doesn't patch the CRS files at
all; in that case you have to patch ohasd manually. Oracle gave me
instructions in the SR I opened for this issue, and I can post them here if
this bug could be the cause of your problem.
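
If your kernel exposes /proc/<pid>/limits, you can also check which memlock
limit a srvctl-started instance actually inherited; the instance name in the
pmon process below is just an example:

  grep "Max locked memory" /proc/$(pgrep -f ora_pmon_ORCL)/limits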

If all else fails, trace the startup of the instance with strace:

strace -f -e trace=shmget sqlplus / as sysdba

The 3rd argument, shmflg, should include SHM_HUGETLB.
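
What you want to see in the trace is a line roughly like this (key, size and
return value are made up here, only the flags matter):

  shmget(0x0a4b3ff2, 2151677952, IPC_CREAT|IPC_EXCL|SHM_HUGETLB|0660) = 229376

If SHM_HUGETLB is missing from shmflg, the SGA segment is being created with
normal 4k pages.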

regards,
-ap
--
//www.freelists.org/webpage/oracle-l

