RE: Oracle 10g R2 RAC on Sun Solaris 10 with EMC storage

 >>>
>>>* ) What is the best clusterware for this configuration 
>>>(Veritas Cluster / Sun Cluster / Oracle Clusterware)? Are 
>>>there any risks if I don't use Veritas Cluster / Sun Cluster?

Oracle Clusterware reboots servers for "fencing". If you
like that, don't use the VCS or Sun Cluster skgxn libraries.
This is a platform that at least offers clusterware choices.
Be happy; the Linux and Windows ports of Oracle do not
integrate with host clusterware.

There is a reason Oracle 10gR2 still integrates with host
clusterware on "real servers" (e.g., Solaris, HP-UX, AIX):
CRS is not fully baked.


>>>
>>>* ) Is a cluster filesystem a compulsory component, or can 
>>>I use ASM instead of a cluster filesystem?

Yes, you can use RAW disk. That is what ASM is.
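
For illustration only, a minimal sketch of handing raw EMC LUNs
to ASM on Solaris. The device names and the diskgroup name are
hypothetical, and EXTERNAL REDUNDANCY assumes the array does
the mirroring:

    # as root: make the raw character devices accessible to oracle
    chown oracle:dba /dev/rdsk/c2t0d0s6 /dev/rdsk/c2t1d0s6
    chmod 660 /dev/rdsk/c2t0d0s6 /dev/rdsk/c2t1d0s6

    # as oracle: create the diskgroup from the +ASM instance
    export ORACLE_SID=+ASM
    sqlplus / as sysdba
    SQL> CREATE DISKGROUP dg_data EXTERNAL REDUNDANCY
         DISK '/dev/rdsk/c2t0d0s6', '/dev/rdsk/c2t1d0s6';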


>>>* ) Are there any known risks involved in using ASM? How is 
>>>the I/O performance with ASM on EMC with Solaris? What are 
>>>the risks involved?

ASM I/O is on par with RAW. Guess why. Because ASM is RAW.
Risk? ASM is a new latecomer to volume management. It
requires functional Oracle instances to "see" the contents
of the space it manages. ASM holds database objects only,
and that excludes external tables. Everything else requires
a filesystem, and in a cluster it makes sense for that
to be a cluster filesystem...but that is an opinion--albeit
one shared by a lot of people (VAXen were pretty popular). 
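
That dependence is easy to demonstrate: the only windows into
ASM-managed space are the +ASM instance itself (or asmcmd, which
connects to it). A sketch, assuming a running +ASM instance:

    export ORACLE_SID=+ASM
    sqlplus -s / as sysdba
    SQL> -- diskgroups and space
    SQL> SELECT name, total_mb, free_mb FROM v$asm_diskgroup;
    SQL> -- database instances that are clients of this ASM instance
    SQL> SELECT instance_name, db_name FROM v$asm_client;

If the +ASM instance is down, no general-purpose tool will read
that space for you.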


>>>
>>>* ) If I don't use a cluster filesystem, where do I put the 
>>>CRS repository and voting disk? Do I have any option other 
>>>than raw partitions?

No. RAW is your only alternative.
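
On Solaris that means dedicating shared slices and pointing the
installer at them. A sketch; the slice names are hypothetical,
and the ownership/permission values are the ones the 10gR2
install guides call for (verify against your docs):

    # OCR -- owned by root, readable by the install group
    chown root:oinstall /dev/rdsk/c3t0d0s5
    chmod 640 /dev/rdsk/c3t0d0s5

    # voting disk -- owned by the oracle user
    chown oracle:oinstall /dev/rdsk/c3t0d0s6
    chmod 660 /dev/rdsk/c3t0d0s6

    # every node must see the same slices under the same names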


>>>
>>>* ) What is the best option for ORACLE_HOME, a shared Oracle 
>>>home or a separate ORACLE_HOME on each node?

Religious wars ensue. I say shared. How much duplicated effort
you like is your choice to make. There is no such thing
as a rolling upgrade with RAC, so ignore the FUD. 


>>>* ) Is GigE okay for the interconnect, or do I need to go for InfiniBand?

Depends on the load. High-speed interconnects (e.g., InfiniBand,
SCI, etc.) are most interesting in tough scalability scenarios,
and those scenarios are generally processor bound. Unfortunately,
CPU-bound systems do not benefit at all from a high-speed
interconnect: messages are delivered faster, but the message
receiver is piled up behind other runnable processes. A typical
cluster scalability problem.
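
Whichever you choose, verify what the instances actually use;
GigE bought for the interconnect sometimes sits idle while
traffic rides the public LAN. A sketch against any running
instance (ORACLE_SID set accordingly):

    sqlplus -s / as sysdba
    SQL> -- 10g view: the interconnect each instance registered
    SQL> SELECT name, ip_address, is_public, source
         FROM v$cluster_interconnects;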


>>>
>>>* ) Are there any notes on best practices for the above components?

Most likely not.


>>>
>>>*) Do I need to consider a failover option for NICs 
>>>(interconnect and public)? If yes, how do I do that?

Seems you are spending a tremendous amount of money on
RAC, so it might make sense to pay for HW redundancy as well.
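
On Solaris 10 the usual answer for the public NIC is IPMP. A
minimal link-based sketch; the interface names (ce0/ce1), group
name, and hostname are hypothetical:

    # /etc/hostname.ce0 -- active interface, carries the address:
    #   rac1-pub netmask + broadcast + group ipmp_pub up
    # /etc/hostname.ce1 -- standby in the same group:
    #   group ipmp_pub standby up

    # after the next boot, confirm both NICs joined the group
    ifconfig -a | grep group

The private interconnect can be protected the same way, or with
link aggregation if you want bandwidth as well as failover.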


>>>*) Are there any other risks I need to consider?

Yes. Most certainly. 

--
http://www.freelists.org/webpage/oracle-l

