If you're backing up via RMAN then the piece sizes are controlled by RMAN, even
if you send it to another MML like CommVault, NetBackup, or ZDLRA. I've worked
with all these MMLs and in the end you still need to configure it at the RMAN
level, AFAIK.
Cheers,
Leng
On 19 Dec 2019, at 6:31 am, Keith Moore <keithmooredba@xxxxxxxxx> wrote:
Yes, I am aware of that. What I am hoping to get answered is how this is
handled by other vendors of backup solutions.
Keith
On Dec 18, 2019, at 1:06 PM, Leng <lkaing@xxxxxxxxx> wrote:
Hi Keith,
You’ll need to play with FILESPERSET, MAXPIECESIZE, or MAXSETSIZE to get a
size that works for you. Most often the default will not be useful if you
only want to restore a single small file from a large backup piece.
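A minimal sketch of those settings in RMAN (the MAXPIECESIZE/MAXSETSIZE values and the SBT_LIBRARY path are illustrative only; tune them for your environment):

```sql
-- Cap the size of each backup piece on the SBT channel
-- (SBT_LIBRARY path is a placeholder for your MML library).
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE'
  PARMS 'SBT_LIBRARY=/path/to/your/mml/lib'
  MAXPIECESIZE 32G;

-- Cap the total size of each backup set.
CONFIGURE MAXSETSIZE TO 64G;
```

Smaller pieces and sets mean a single-file restore pulls less unwanted data back from the media manager.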
Cheers,
Leng
On 19 Dec 2019, at 5:01 am, Keith Moore <keithmooredba@xxxxxxxxx> wrote:
I am working for a client that has an Exadata Cloud at Customer. We just
migrated a large database and I am setting up backups. The backups go to
the Object Storage that is part of the Cloud at Customer environment, and
backups and restores are done through a tape interface.
As part of the testing, I tried to restore a single 5 GB archivelog and
eventually killed it after around 12 hours.
After tracing and much back and forth with Oracle support, it was found
that the issue is related to FILESPERSET. The archive log was part of a
backup set containing 45 archive logs that was around 500 GB in size. To
restore the one archive log, the entire 500 GB has to be downloaded,
throwing away what is not needed.
The obvious solution is to reduce filesperset to a low number.
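Something like the following is what I have in mind (the sequence number is just an example):

```sql
-- One archive log per backup set, so restoring a single log
-- only downloads that log's backup piece from object storage.
BACKUP ARCHIVELOG ALL FILESPERSET 1;

-- Restoring one log sequence should then fetch only its own set.
RESTORE ARCHIVELOG SEQUENCE 12345;
```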
But my question for people with knowledge of other backup systems (hello
Mladen) is whether this is normal. It is horribly inefficient for
situations like this. Since object storage is "dumb", maybe there is no
other option, but it seems like this should be filtered on the storage end
rather than transferring everything over what is already a slow interface.
Keith
--
//www.freelists.org/webpage/oracle-l