Re: Oracle RAC nodes eviction question

  • From: Jeremy Schneider <jeremy.schneider@xxxxxxxxxxxxxx>
  • To: "Amir.Hameed@xxxxxxxxx" <Amir.Hameed@xxxxxxxxx>
  • Date: Tue, 19 Aug 2014 20:38:08 -0400

Old thread, I know. :) Just wanted to add a quick comment in response to this 
message: for this exact reason, I've become a proponent of always having GI on 
local storage (even in large environments).
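
A quick way to confirm where a Grid home actually lives on Solaris - the paths 
below are only an example, so substitute your own GRID_HOME:

  # If the Grid home is NFS-mounted, nfsstat lists the mount;
  # df's filesystem column shows nas-head:/export/... for NFS mounts
  nfsstat -m | grep /u01/app/11.2.0/grid
  df -k /u01/app/11.2.0/grid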

I've been in situations where we had SAN problems, and it was much harder to 
build a good timeline because none of the GI logs were available.
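
With local logs, reconstructing the timeline is mostly a matter of walking the 
clusterware alert log, the ocssd log, and the cssdagent log on each node. A 
rough sketch, assuming the default 11.2 log layout and that GRID_HOME is set:

  NODE=`hostname`
  # node eviction messages land in the clusterware alert log
  grep -i evict $GRID_HOME/log/$NODE/alert$NODE.log
  # CSS daemon (clssnm*) messages show heartbeat/polling state
  grep -i clssnm $GRID_HOME/log/$NODE/cssd/ocssd.log | tail -50
  tail -50 $GRID_HOME/log/$NODE/agent/ohasd/oracssdagent_root/oracssdagent_root.log

The cssdagent log is where the hang-detection and reboot decisions tend to 
show up, at least on 11.2.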

-Jeremy

--
http://about.me/jeremy_schneider
Sent from my iPhone

> On Aug 13, 2014, at 5:38 PM, "Hameed, Amir" <Amir.Hameed@xxxxxxxxx> wrote:
> 
> Thanks, Seth.
> Since the log files are located inside the GI_HOME, and are therefore also 
> on the NAS, there were no entries in the logs when the NAS head failed 
> over, which is expected.
> From: Seth Miller [mailto:sethmiller.sm@xxxxxxxxx] 
> Sent: Wednesday, August 13, 2014 5:32 PM
> To: Hameed, Amir
> Cc: oracle-l@xxxxxxxxxxxxx
> Subject: Re: Oracle RAC nodes eviction question
>
> Amir,
> 
> The first question is: why was the node evicted? The answer should be 
> pretty clear in the clusterware alert log. If the binaries go away for any 
> amount of time, cssdagent or cssdmonitor will likely see that as a hang and 
> initiate a reboot.
> 
> Seth Miller
> 
>
> On Wed, Aug 13, 2014 at 2:57 PM, Hameed, Amir <Amir.Hameed@xxxxxxxxx> wrote:
> Folks,
> I am trying to understand the behavior of an Oracle RAC cluster when the 
> Grid and RAC binary homes become unavailable while the cluster and Oracle 
> RAC are running. The Grid version is 11.2.0.3 and the platform is Solaris 
> 10. The Oracle Grid and Oracle RAC environments are on NAS, with the 
> database configured with dNFS. The storage for the Grid and RAC binaries 
> comes from one NAS head, whereas the OCR and voting disks (three of each) 
> are spread over three NAS heads so that if one NAS head becomes 
> unavailable, the cluster can still access two voting disks. The 
> recommendation for this configuration came from the storage vendor and 
> Oracle.
> 
> What we observed last weekend was that when the NAS head from which the 
> Grid and RAC binaries were mounted went down for a few minutes, all RAC 
> nodes rebooted even though two voting disks were still accessible. In my 
> destructive testing about a year ago, one of the tests was to pull all 
> cables from the NICs used for kernel NFS on one of the RAC nodes, but the 
> cluster did not evict that node. Any feedback will be appreciated.
>
> Thanks,
> Amir
