Re: Data Guard Rebuild

  • From: Kenny Payton <k3nnyp@xxxxxxxxx>
  • To: "Mark W. Farnham" <mwf@xxxxxxxx>
  • Date: Fri, 13 Jun 2014 08:07:06 -0400

Thanks, Mark.  I’ve made some comments inline.  This is the dialogue I was hoping 
for.  I’m now more convinced that an easy alternative might not exist.


On Jun 12, 2014, at 10:52 PM, Mark W. Farnham <mwf@xxxxxxxx> wrote:

> The next thing that comes to mind is archiving to multiple locations. That 
> should save you the construction of the shell and the retrieve and send lag. 
> When you find out you need to do a new instantiation, just start archiving 
> additionally to the destination machine and by the time your database files 
> arrive you’ll have all the logs you need there.
>  
> That’s all described (including remote destinations) in 
> http://docs.oracle.com/cd/E11882_01/server.112/e25494/archredo.htm
>  

The remote destination is what I’m leveraging with my shell instance.  
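For anyone following along, the second destination is just the standard 
log_archive_dest_n setup; the dest number and service name below are 
illustrative, not our actual config:

  -- assumes shellstby is already listed in log_archive_config (DG_CONFIG)
  ALTER SYSTEM SET log_archive_dest_3 =
    'SERVICE=shellstby ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=shellstby'
    SCOPE=BOTH;
  ALTER SYSTEM SET log_archive_dest_state_3 = ENABLE SCOPE=BOTH;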


> You won’t have a service destination set up yet unless you create a 
> destination shell (which I suppose is okay the way you are doing it), but you 
> *could* make the duplicates local outside of ASM so you can transport them 
> with OS tools.

A viable option.
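If we went that route it would presumably be RMAN image copies straight to a 
filesystem, something like this sketch (staging path invented), then an 
OS-level copy and a CATALOG on the far side:

  RMAN> BACKUP AS COPY DATABASE FORMAT '/staging/dup/%U';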


>  
> I guess the enhancement request to Oracle would be the ability to set up a 
> listener on a remote server that does not have a database yet with a simple 
> service to receive archived redo logs.
>  
> (Someone clue me in if you can already do that and I missed the memo.)

This was what I was hoping to learn about.  It sounds like the easy button 
doesn’t exist.
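The closest thing I know of is a static registration in listener.ora on the 
empty server, sketched below with made-up names and paths.  That only gives you 
something to connect to, though; without at least a mounted shell instance 
behind it there is nothing to receive the redo, which is why I build the shell.

  SID_LIST_LISTENER =
    (SID_LIST =
      (SID_DESC =
        (GLOBAL_DBNAME = shellstby)
        (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1)
        (SID_NAME = shellstby)
      )
    )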
>  
> If you have a big database, shipping a cabinet would have a lot higher 
> bandwidth than 1ge. But size is relative. If you’ve got sufficient WAN 
> headroom to do the transfers and physical transport of media costs extra, 
> then you’re probably doing it correctly. What you’ve done is slick, but it 
> gives me the heebie-jeebies to issue offline drop commands unless I really 
> must.

Our databases range from 10T-25T.  At 25T we’re probably getting close to 
break-even with this approach; it generally takes 4-5 days for the largest.  
There’s some congestion in the pipe, but we generally max it out just above 
100MB/sec; that’s 2.7 days for 25T running on its own.  Loading a storage 
device, shipping it, scheduling someone at the colo to connect it and then 
offloading the data would probably take a few days on its own.
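For the 2.7-day figure, the back-of-the-envelope math:

  25T                          ~= 25,000,000 MB
  25,000,000 MB / ~107 MB/sec  ~= 234,000 sec  ~= 2.7 days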


>  
> If you ship the utility tablespaces first (SYSTEM, SYSAUX, etc. and the 
> pieces you’re already sending), I’m not clear why you need to do the offline 
> DROP to get that started up. Offline should be enough. Just don’t start 
> applying redo logs until you’ve got all the files there to bring the 
> tablespaces, and the datafiles that compose them, back online. (Am I missing 
> something?)
>  

You’re probably correct; in my tmp instance I might be able to offline the 
datafiles instead of offline dropping them.  Attempting to break up the actual 
duplication process would probably be a good bit more effort than creating the 
tmp instance.  Currently I let RMAN and the duplicate database command do the 
heavy lifting.  Originally I thought transport services would work once RMAN 
mounted the database during the duplication, but no such luck.
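If I try the gentler form it’s a one-word difference per file; the file name 
here is illustrative:

  -- plain OFFLINE requires ARCHIVELOG mode, which a Data Guard primary is in anyway
  ALTER DATABASE DATAFILE '+DATA/mydb/datafile/users.301.849024615' OFFLINE;
  -- versus what the tmp instance does today
  ALTER DATABASE DATAFILE '+DATA/mydb/datafile/users.301.849024615' OFFLINE DROP;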

> mwf
>  
> From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
> Behalf Of Kenny Payton
> Sent: Thursday, June 12, 2014 9:55 PM
> To: Mark W. Farnham
> Cc: oracle-l@xxxxxxxxxxxxx; Jeremy Schneider
> Subject: RE: Data Guard Rebuild
>  
> Yeah, I left a few things out I'm sure, but I was more curious about what 
> others were doing to overcome the situation.  I suspected others who had hit, 
> and worked around, the same problem already knew my pain.
> 
> A process to sftp, or better yet rsync, could be an alternative, but since we 
> are on ASM (missing info) it's a little more complicated.
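> Staging them out of ASM first would be something like this (paths invented):
>
>   asmcmd cp +FRA/MYDB/ARCHIVELOG/2014_06_12/thread_1_seq_12345.456.849 /staging/arch/
>   rsync -av /staging/arch/ standby-host:/staging/arch/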
> 
> The rebuilds generally complete within 1 to 2 days in the worst of scenarios. 
>   Even with overnight shipping of an external drive it would be hard to beat 
> our 1ge link (probably more of the information you were asking for), even at 
> 1T (more info) of archived logs.  We are also shipping to a remote data 
> center that costs us for boots on the ground.
> 
> btw, this is an 11gR2 environment.
> 
> Kenny
> 
> On Jun 12, 2014 6:06 PM, "Mark W. Farnham" <mwf@xxxxxxxx> wrote:
> I guess I’d copy the archived redo logs to a removable device before your 
> primary backup and ship the device.
> Of course more information would be required about your situation to know 
> whether that is appropriate.
> It does seem unreasonable to tie up your WAN for multiple days shipping the 
> files. It may be easier to physically ship them as well.
> The bandwidth of media on a bus or plane can be incredible. As long as you’re 
> okay with the latency it is quite reliable.
>  
> (paraphrase of Gorman and me, separately, circa 1990)
>  
> Now, if physical media transport does not fit your world and the WAN is your 
> only route:
>  
> Your work-around seems to still ship all the archived redo logs. If you’re 
> doing that anyway, why not just use some sftp equivalent and tell the 
> recovery process where they live?
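> (On the standby side that’s roughly the following, path invented; cataloging 
> the copied logs registers them so recovery can find them:)
>
>   RMAN> CATALOG START WITH '/staging/arch/';
>   SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;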
>  
> They will be in competition with the datafile transport, but you just need to 
> keep ahead of your delete window, so I don’t see the problem, since the same 
> seems to be true of Oracle’s transport mechanism.
>  
> mwf
>  
> From: oracle-l-bounce@xxxxxxxxxxxxx [mailto:oracle-l-bounce@xxxxxxxxxxxxx] On 
> Behalf Of Kenny Payton
> Sent: Thursday, June 12, 2014 12:14 PM
> To: Jeremy Schneider
> Cc: ORACLE-L
> Subject: Re: Data Guard Rebuild
>  
> It works.  I came up with a couple of tricks for this.  One was to 
> automatically generate the register statements and execute them on the 
> standby once the rebuild completes.  Another was to share the same 
> destination on the primary: when the rebuild is complete, the destination is 
> deferred, updated to the new target and then restarted, which minimizes 
> duplicate shipping of logs.
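> The switch itself is just the usual defer/retarget/enable dance; the dest 
> number and service name here are illustrative:
>
>   ALTER SYSTEM SET log_archive_dest_state_2 = DEFER;
>   ALTER SYSTEM SET log_archive_dest_2 = 'SERVICE=newstby ASYNC DB_UNIQUE_NAME=newstby';
>   ALTER SYSTEM SET log_archive_dest_state_2 = ENABLE;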
>  
> I still wish I could get the logs to ship while the rebuild is running.
>  
> Kenny
>  
>  
>  
> On Jun 12, 2014, at 12:08 PM, Jeremy Schneider 
> <jeremy.schneider@xxxxxxxxxxxxxx> wrote:
>  
> 
> That's a pretty cool workaround actually.  I don't have a good solution; I 
> usually somehow find space to temporarily keep a lot of archivelogs online 
> until I get the standby set up - and I watch it closely in the meantime.  Or 
> else I do archivelog restores on the primary after the shipping is set up and 
> keep resolving gaps until it's caught up.  I might try your idea.
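> (The gap-resolving restores are plain RMAN on the primary; sequence numbers 
> here are invented:)
>
>   RMAN> RESTORE ARCHIVELOG FROM SEQUENCE 12340 UNTIL SEQUENCE 12400 THREAD 1;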
>  
> -J
> 
> --
> http://about.me/jeremy_schneider
>  
> 
> On Thu, Jun 12, 2014 at 9:44 AM, Kenny Payton <k3nnyp@xxxxxxxxx> wrote:
> I create/rebuild standby databases from time to time and when they are large 
> and going across our WAN they can take more than a day to complete, sometimes 
> multiple days.  These databases also have a high churn rate and generate a 
> large amount of redo.  Our normal backup processes delete the archived logs 
> on the primary prior to completion of the standby and require restoring them 
> on the primary after the managed recovery begins so that they can ship to the 
> standby.  We do not reserve enough space to keep multiple days of archived 
> logs around.
> 
> I’m curious if anyone has another workaround for this.
> 
> I have one that requires creating the shell of the database by restoring the 
> control files, offline dropping all data files and configuring transport to 
> ship logs to it.  Once managed recovery starts I just register the log files 
> that have shipped so far and switch the transport service on the primary.  
> This is a little clunky and fairly manual at this point and I would love an 
> easier approach.
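> Generating the register statements amounts to a one-liner against the shell’s 
> v$archived_log; the filter here is a guess at what you’d actually want:
>
>   SELECT 'ALTER DATABASE REGISTER LOGFILE '''||name||''';'
>     FROM v$archived_log
>    WHERE name IS NOT NULL;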
> 
> Thanks,
> Kenny
> 
