RE: Oracle RAC backup hardware/software recommendation?

  • From: manuela mueller <mueller_m@xxxxxxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Fri, 30 Sep 2005 09:49:15 +0200

Dear all,
thanks for your contributions to this thread so far, very interesting
discussion.

One note about the recovery scenario Chris mentioned.
I totally agree with the problems you are likely to face if you run
RMAN-MML clients on more than one node (which one probably does in a RAC
environment).
Your life may be a bit easier if you can put (archived) redo log files
on OCFS.
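
A minimal sketch of what I mean, assuming an OCFS mount at /u02/oradata/arch
(the path is just an example) and an spfile, set on both instances:

   SQL> alter system set log_archive_dest_1='LOCATION=/u02/oradata/arch'
        scope=both sid='*';

With both instances archiving to the same shared directory, a single RMAN
session on either node sees all archived logs for backup and restore.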

There's a document at Metalink, 'OCFS Best Practices', which deals with
the files you can put on OCFS:
URL:
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=237997.1

<quote>
3. File Types Supported by OCFS


      At this time (version 1.0.9), OCFS only supports Oracle data files
      - this includes redo log files, archive log files, controlfiles
      and datafiles. OCFS also supports the Oracle Cluster Manager (OCM)
      shared quorum disk file and shared Server Configuration file (for
      svrctl). Support for shared Oracle Home installation is not
      currently supported, but expected in the latter part of 2003 (OCFS
      v2.x?).

</quote>

Unfortunately this paragraph does not cover the fragmentation issues with
OCFS.
A bit later in the same document:

<quote>
7. Defragmentation
Given the extent-based allocation of volume metadata and user file data
clusters, it is possible for disk fragmentation to occur. The following
guidelines list measures to prevent volume fragmentation:

OCFS requires contiguous space on disk for initial datafile creation
e.g. if you create a 1Gb datafile (in one command), it requires 1Gb of
contiguous space on disk. If you then extend the datafile by 100Mb, it
requires another 100Mb chunk of contiguous disk space. However, the
100Mb chunk need not fall right behind the 1Gb chunk.
- Avoid heavy, concurrent new file creation, deletion or extension,
particularly from multiple nodes to the same partition.
- Attempt to correctly size Oracle datafiles before creation (including
adding new datafiles to tablespaces), ensuring to allow for more than
adequate growth.
- Use a consistent extent/next extent size for all tablespaces in order
to prevent partition fragmentation (where datafiles are autoextensible).
- Separate data and index datafiles across separate OCFS partitions.
- Separate archive logs and redo log files across separate OCFS partitions.
- Where possible, avoid enabling datafile autoextensibility. Statically
sized datafiles are ideal to avoid fragmentation. Autoextensibility is
acceptable as long as large next extents are configured.
- Where possible, use Recovery Manager (RMAN), particularly for restoration -
RMAN writes in o_direct mode by default.

</quote>

Another document at Metalink, the 'RAC FAQ':
<quote>

What files can I put on Linux OCFS?
- Datafiles
- Control Files
- Redo Logs
- Archive Logs
- Shared Configuration File (OCR)
- Quorum / Voting File
- SPFILE
/Modified: 14-AUG-03    Ref #: ID-4156/
</quote>

So what should the conclusion be?
Put redo logs and archived redo logs on OCFS or not?
This question was asked repeatedly in the Metalink RMAN forum two years
ago, but never directly answered.

Have a nice day
Manuela Mueller



>But yes, this is exactly what I mean.
>It hits the DBA smack in the face when one tries to
>        RMAN> restore archivelog all;
>in a RAC environment where the "local" arch logs are backed up
>independently on each server (instance) in the RMAN backup
>session/script.  The logs belong to one database, but two MML clients.
>The RMAN-MML restore session will blow up because it is restoring the
>logs for one client only at a time, but the RMAN command was for ALL
>logs (from any instance...incompatible).
>I would set up each TDPO client database server to be able to "spoof"
>the other RAC server at any time.
>That meant duplicate config files on each client RAC server.  So at
>any time I could restore (RMAN) backups from node A to node A, and backups
>from node B to node A...thus getting all of my arch log files in a failure.
>
>You don't want to *learn* this in the heat of battle, but it is not
>intuitive during the setup.
>Again, #1: Test, Test, Test complete and total loss database restores.
>Then and only then do "missed" issues become obvious.
>
>Thus a strong argument can be made for a cluster filesystem for arch logs
>(which OCFS does not support/recommend).
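
For what it's worth, the "spoofing" setup described above boils down to
something like the following RMAN run block (the TDPO option file paths and
node names are just placeholders for whatever your TSM configuration uses):

   RMAN> run {
           allocate channel t1 device type sbt
             parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo_nodeA.opt)';
           allocate channel t2 device type sbt
             parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin/tdpo_nodeB.opt)';
           restore archivelog all;
         }

With one channel per node's TDPO configuration, a restore session on node A
can also pull the archived logs that were backed up from node B.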

--
//www.freelists.org/webpage/oracle-l
