Re: RMAN Backup Question - FILESPERSET

  • From: "Mark Strickland" <strickland.mark@xxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Tue, 30 Jan 2007 08:42:15 -0800

This is a follow-up to explain what I discovered through my testing and with
confirmation from Oracle Support.  When RMAN calculates how many files to
put in each backupset, the calculation is based on the sizes of the datafiles
themselves, not on the amount of data from each file that will actually get
backed up.  Even if FILESPERSET
is set higher, MAXSETSIZE will still limit the size of the backupsets.  For
my level 1 incremental backups, MAXSETSIZE was set to 6-GB and RMAN was
putting just 2 or 3 files in each backupset.  This resulted in about 330
backupsets with many of those being just 20K in size.  Backing up that mess
to tape took 4-1/2 hours.  I'm now running the level 1s with MAXSETSIZE set
to 64-GB and RMAN puts around 15 files in each backupset.  I end up with
around 30 backupsets with a few being 70K and the largest being 330-MB.
That only takes about 40 minutes to back up to tape.  MUCH better.  I'm
going to watch the backups this week, then probably raise MAXSETSIZE to
128-GB.  I don't actually want backupsets that are that large, but in this
case, I think there's very little risk of that happening.  It would require
25 of my 5-GB datafiles to have most or all of their blocks modified to
result in a backupset that large.  Not going to happen (famous last words).
There is no bulk loading of data and no batch jobs that modify large amounts
of data.  So RMAN can use the Block Change Tracking file to determine what
to back up but not to calculate how many files to put in each backupset.
Pity.
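For anyone wanting to try the same thing, the change described above amounts
to something like the following RMAN commands (a sketch only -- the 64G value
is the one from my testing, and channel/tape configuration is omitted):

```
# Persistently raise the per-backupset size cap (was 6G here).
CONFIGURE MAXSETSIZE TO 64G;

# Level 1 incremental; with the larger cap, RMAN packed ~15 files
# per backupset in my environment instead of 2-3.
BACKUP INCREMENTAL LEVEL 1 DATABASE;
```

CONFIGURE makes the setting persistent for the target database; it can also
be overridden per-session with SET MAXSETSIZE inside a run block.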

Regards,
Mark Strickland
Seattle, WA


On 1/24/07, Mark Strickland <strickland.mark@xxxxxxxxx> wrote:

Oracle 10.1.0.5 on Solaris 9.  Using block change tracking file and
flash_recovery_area.

The incremental level 1 backups of one of our databases produce a large
number of very small backupsets, about 20K each.  This causes the backup of
the flash_recovery_area to tape to take around 4-1/2 hours, whereas the
preceding backup into the flash_recovery_area takes only around 15 minutes.  There
are 378 datafiles and many of them never get modified (old partitions).  The
backupsets appear to always have 3-4 datafiles and the backupsets for those
never-modified files are very small.  FILESPERSET is not set, but MAXSETSIZE
is set to 6000M.  There are four channels.  I've pored over the manuals and
Robert Freeman's fine RMAN book, along with Metalink and Google, and it
seems there is conflicting information about the default value for
FILESPERSET.  In one place in the docs, it says that

"The number of files to be backed up is divided by the number of channels.
If the result is less than 64, then it is the number of files placed in each
backupset. Otherwise, 64 files will be placed in each backupset."

Robert Freeman appears to agree with that.  In my case that would be the
lesser of 378/4 or 64, so 64.  In other places in the docs, it says that the
default maximum number of files that will go into a backupset is 4.  That
pretty much matches what I'm seeing in my environment.  I've been
experimenting with FILESPERSET today in a test environment and I'm able to
set it to different values and observe that the backupsets do contain that
many files.  Therefore, on Friday evening, I plan to change FILESPERSET to
64 in Production and run the incremental level 1 backup and see what
happens.  If it fails for some reason, I can easily re-run it with the
default.
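For the record, the Friday test doesn't require a persistent configuration
change: FILESPERSET can be given directly on the BACKUP command, so the plan
is something like this (a sketch; tags/formats omitted):

```
# Per the documented rule, the default here should be the lesser of
# 378 files / 4 channels (= ~94) and 64, i.e. 64 -- but I'm seeing 3-4.
BACKUP INCREMENTAL LEVEL 1 DATABASE FILESPERSET 64;
```

If it misbehaves, re-running without the FILESPERSET clause falls back to
whatever the real default is.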

Another oddity is that for our level 0 backups, we have MAXSETSIZE set to
10G but I see a few backupsets that exceed that size, like 20G.  Not worried
about that, just curious.

Any insights?

Regards,
Mark Strickland
