We currently have multiple Oracle 10.2.0.3 and 10.2.0.4 databases, along with a 2-node RAC cluster, running on NetApp.

Our standard mount options for the RAC installation are:

na7g1:/vol/rac_oradata - /mnt_na7g1_rac_oradata nfs - yes hard,vers=3,suid,nointr,proto=tcp,rsize=32768,wsize=32768,bg,noac,timeo=600

while our standard mount options for non-RAC installations are:

na9g2:/vol/ttcp - /mnt_na9g2_ttcp nfs - yes hard,vers=3,suid,intr,proto=tcp,rsize=32768,wsize=32768

For non-RAC setups, we use NetApp's volume software so that each of our databases is in its own volume and therefore does not affect any other instance if an issue arises.

We do not use RMAN. We have an archive script and a backup script that use NetApp Snapshots to give us our backup protection, along with standby databases at a separate colocation. The archive script pushes logs to the standbys and applies them every 15 minutes. The databases undergo a hot backup every night, where the tablespaces are put into backup mode and then the NetApp Snapshot occurs. We have successfully used these hot backups to recover. We also have a cold backup happening every week, where the databases are brought down (again, only one needs to be down at a time, since each is in its own NetApp volume) and snapshotted.

For the RAC setups, we currently just use a big hammer, i.e. we bring down the instances and ASM and take a snapshot. Very, very ugly and old-school. We are working on something better (but then, it's only a test instance, so we are not in a hurry to change yet).

________________________________
From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On Behalf Of Thomas Roach
Sent: Monday, April 19, 2010 1:12 PM
To: Oracle Discussion List
Subject: Oracle over NFS

Hi,

We are currently on Oracle RAC 10.2.0.4 (4 nodes) on Linux x86_64 RHEL 5.3. We are currently using block devices on Hitachi but have invested in a NetApp. Our two options are NFS (which the Unix admin favors) and block devices over Fibre Channel.
In trying the various NFS options, we have come across two mount options that Oracle must use: noac and actimeo=0. With noac, the performance is horrible (about 10 MB/s), and with actimeo=0 we get about 240 MB/s, which is the maximum bandwidth the interface will support (3 x 1 Gbps NICs on the NetApp side). With actimeo=0 we get better performance, but we see a large number of network discards. He has essentially said he is putting it back to noac to get rid of the network discards. I said that if the performance goes back to 10 MB/s, then we can't move to NFS over NetApp.

I wanted to get some insight from the group, as I get conflicting information when researching this issue. Does anyone out there have Oracle RAC running over NFS to NetApp, and if so, what mount options are you using? Any specific configurations on the NetApp side?

Forgot to mention that we are using a single Cisco switch between the NetApp and the Oracle servers, running with jumbo frames.

Thanks!

Tom
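[Editor's note] For readers comparing the two sets of options in this thread, a commonly cited fstab entry for Oracle RAC datafiles over NFSv3 on Linux looks roughly like the sketch below. The filer and mount-point names are placeholders, not from this thread, and the exact values should be confirmed against Oracle's and NetApp's current support documentation for your versions:

```
# /etc/fstab sketch (hypothetical filer/volume/mount names).
# Oracle RAC over NFS generally requires disabling client attribute
# caching (actimeo=0 or noac) so that all nodes see consistent file
# attributes; actimeo=0 affects only caching, while noac also forces
# synchronous writes, which is one reason throughput can differ.
filer1:/vol/rac_oradata  /mnt/rac_oradata  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
```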
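[Editor's note] The nightly hot-backup flow described in the reply at the top of the thread (tablespaces into backup mode, then a NetApp Snapshot, then backup mode off) could be sketched as below. This is a hedged illustration, not the poster's actual script: the filer name, volume, and snapshot name are hypothetical, the snapshot is shown as an `ssh` call to the filer's `snap create` command, and the SQL is printed rather than piped into sqlplus so the sketch can run without a live database.

```shell
#!/bin/sh
# Hypothetical hot-backup sketch using NetApp Snapshots.
# FILER, VOLUME, and SNAP_NAME are placeholder values.
FILER=na9g2
VOLUME=ttcp
SNAP_NAME="nightly.$(date +%Y%m%d)"

# Emit the SQL a real script would feed to sqlplus "/ as sysdba".
# (The poster's script iterates per tablespace; 10g also allows the
# database-level form shown here.)
begin_backup_sql() {
    echo "ALTER DATABASE BEGIN BACKUP;"
}
end_backup_sql() {
    echo "ALTER DATABASE END BACKUP;"
    echo "ALTER SYSTEM ARCHIVE LOG CURRENT;"
}

run_hot_backup() {
    begin_backup_sql                                  # datafiles now snapshot-consistent
    echo "ssh $FILER snap create $VOLUME $SNAP_NAME"  # snapshot taken while in backup mode
    end_backup_sql                                    # leave backup mode, archive current log
}

run_hot_backup
```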