Re: ASM "disk" replacement

  • From: Mark Strickland <strickland.mark@xxxxxxxxx>
  • To: DonGranaman@xxxxxxxxxxxxxxx
  • Date: Thu, 7 May 2009 14:17:08 -0700

We're doing exactly this right now, moving to a different storage array.
It's going quite smoothly, although we're doing the adds and drops in two
steps.  The only nasty surprise we've hit is that we took NetApp snapshots
before starting the drops, and the snapshots were located on the same filer.
They gradually grew as the rebalance rewrote data, and the NFS mount filled
up.  The ASM disks got corrupted and we had to restore from the snapshot and
start again.  As this was the standby, it didn't cause a huge amount of
stress.  We'll soon be doing the primaries.  Your database is around the
same size as the ones we're moving, and the rebalances are taking us just a
few hours.
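
For what it's worth, our two-step version looks roughly like this (disk and
diskgroup names are made up here, not our actual ones):

  -- step 1: add the new disks and let the rebalance finish
  ALTER DISKGROUP data ADD DISK
    '/dev/oracleasm/disks/NEW01',
    '/dev/oracleasm/disks/NEW02'
    REBALANCE POWER 4;

  -- step 2: once V$ASM_OPERATION shows no active rebalance,
  -- drop the old disks (this kicks off a second rebalance)
  ALTER DISKGROUP data DROP DISK DATA_0000, DATA_0001
    REBALANCE POWER 4;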

-Mark



On Thu, May 7, 2009 at 12:57 PM, Don Granaman
<DonGranaman@xxxxxxxxxxxxxxx> wrote:

>  We are using ASM for 10.2.0.4 RAC on Linux x86-64 and will need to
> replace a number of disks in the storage array.
>
>
>
> I am wondering if anyone here has had any “interesting” experience with
> this sort of thing – good or bad.
>
>
>
> This can also serve as a sanity check on my embryonic “plan” – I just heard
> about the need for this today and have to come up with one.  We don’t have to
> do this soon, but it absolutely must be done well before the end of the
> year.
>
>
>
> This may be complicated (or simplified) a bit by the fact that we have a
> sort of “minimal ASM” setup.  (“New-fangled gizmos anywho – don’t entirely
> trust ‘em.” – the OraSaurus in me says.)
>
>
>
> The basics:
>
>
>
> Only datafiles and tempfiles are on ASM - in a single diskgroup.
>
> Redo is on “real” raw devices, not ASM.  Archive destinations on OCFS2
> filesystem.  RMAN backups to NAS.
>
> 2.5+ TB database – hot, mostly with massive 24xforever inserts.  (This
> beast generates 200+ GB of archive on a slow day.)
>
> The six existing ASM “disks” are actually RAID-10 sets striped and mirrored
> in hardware – about 500 GB each.  (See the query sketch after this list.)
>
> Only “external redundancy” in ASM.
>
> Physical (exclusive) standby also on ASM – different hardware and location,
> same layout, but a different diskgroup name.
>
> [Network link between primary and standby is rather busy – bandwidth about
> 50% utilized just by log shipping.]
>
> Notable downtime is not an option.
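>
> For reference, the current layout can be confirmed with something like this
> (an untested sketch – column names from memory):
>
>   SELECT g.name  AS group_name,
>          g.type  AS redundancy,
>          d.name  AS disk_name,
>          d.path,
>          ROUND(d.total_mb/1024) AS total_gb
>   FROM   v$asm_disk d
>   JOIN   v$asm_diskgroup g ON g.group_number = d.group_number
>   ORDER  BY d.name;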
>
>
>
> In theory, this should be easy (famous last words) – alter diskgroup … add
> <some new “disks”> drop <some old “disks”>;
>
>
>
> Adding the new replacement “disks” (with similarly sized/performing
> hardware RAID-10 sets) and dropping the old ones in a single “alter
> diskgroup” command **should** minimize the ASM rebalancing.
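>
> In sketch form (diskgroup and disk names here are hypothetical, not ours):
>
>   ALTER DISKGROUP dg_data
>     ADD  DISK '/dev/mapper/new_lun1', '/dev/mapper/new_lun2'
>     DROP DISK DG_DATA_0000, DG_DATA_0001
>     REBALANCE POWER 6;
>
> A single statement like this moves extents once, from old “disks” to new,
> rather than triggering two separate rebalances.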
>
>
>
> We would likely do this first on the standby, then on the primary once the
> standby “reorg” is complete.
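>
> Before touching the primary, we would wait for the standby rebalance to
> drain – something like:
>
>   -- no rows returned means no rebalance is running
>   SELECT group_number, operation, state, power, est_minutes
>   FROM   v$asm_operation;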
>
> [Hopefully, we will have a complete production mirror – RAC, standby and
> all, with somewhat weaker hardware though – to test all this on first.]
>
>
>
> So, does anyone have any relevant experiences, horror stories, caveats,
> hearsay, et cetera to share?
>
>
>
> Thanks for any info!
>
> Don Granaman
>
