Re: Export/Import with Physical Standby

  • From: David Barbour <david.barbour1@xxxxxxxxx>
  • To: "Sweetser, Joe" <JSweetser@xxxxxxxx>
  • Date: Fri, 2 Oct 2009 11:10:05 -0400

What if I shut down the application so there is no DML?

On Fri, Oct 2, 2009 at 11:07 AM, Sweetser, Joe <JSweetser@xxxxxxxx> wrote:

>  One thing that might have an impact, depending on the volatility of the
> data as it is being exported: I have been burned by not specifying
> flashback_time, which is the Data Pump equivalent of export's
> consistent=y.  And you wouldn't know there was an issue until you did the
> impdp.  The line in your parfile could look something like this:
>
>
>
> flashback_time="to_timestamp('28-09-2009 02:05:31', 'DD-MM-YYYY HH24:MI:SS')"
>
>
>
> *From:* oracle-l-bounce@xxxxxxxxxxxxx [mailto:
> oracle-l-bounce@xxxxxxxxxxxxx] *On Behalf Of *David Barbour
> *Sent:* Friday, October 02, 2009 8:59 AM
> *To:* Ian Cary
> *Cc:* oracle-l@xxxxxxxxxxxxx; oracle-l-bounce@xxxxxxxxxxxxx
> *Subject:* Re: Export/Import with Physical Standby
>
>
>
> Thank you.  I was wondering about that parameter.  I really haven't used
> Data Pump much - no real need, and the exports I've done over the past three
> years or so have all been for 3rd-party vendors who weren't entirely
> comfortable with Oracle, let alone something 'new' like Data Pump.
>
> After doing some research, I think that is the way I'm going to tackle
> this.
>
> Regarding some other questions/observations: I have identified the tables
> and scripted the whole thing with six parfiles (that pesky 3600K limit on
> parfile size).
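>
> To illustrate, one of the six slices might look like this (directory,
> dumpfile and table names here are invented; only the parameter names are
> standard expdp syntax):

```shell
# Hypothetical slice parfile -- one of six covering the full table list.
cat > exp_slice1.par <<'EOF'
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=exp_slice1_%U.dmp
LOGFILE=exp_slice1.log
TABLES=APP.ORDERS,APP.ORDER_LINES,APP.CUSTOMERS
FLASHBACK_TIME="to_timestamp('28-09-2009 02:05:31', 'DD-MM-YYYY HH24:MI:SS')"
EOF

# Run the export for this slice.
expdp system/password PARFILE=exp_slice1.par
```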
>
> By running parallel=2, I was able to export all the data in 25 minutes.  I
> can live with that.  I can't practice the import, not even on our
> development system, but I've got an 8-hour window for this.
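>
> For reference, the parallel run was along these lines (file and parfile
> names are placeholders):

```shell
# PARALLEL=2 wants a %U wildcard in DUMPFILE so each worker writes its own file.
expdp system/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=exp_tabs_%U.dmp \
      LOGFILE=exp_tabs.log PARFILE=exp_tabs.par PARALLEL=2
```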
>
> We have really good bandwidth to our standby site.  We're using ARCH for
> apply, and have a three-hour apply delay (in case of corruption on the
> Primary).  We've got 16 redo logs at 500MB each, written to a RAID 1
> array with lots of high-RPM drives (about the only thing that works really
> well in our SAN).  Deferring the send makes sense, though.  I can (and have)
> caught up several hundred logs in a relatively short time.  For bigger lots,
> I've scripted the compress/send/uncompress/register process in case it
> becomes necessary.
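>
> The catch-up script is roughly this shape (hosts, paths and the log name
> pattern are assumptions, not the real script):

```shell
#!/bin/sh
# Compress each archived log, ship it, uncompress it on the standby,
# then register it so managed recovery can pick it up.
for f in /u01/arch/ORCL/arch_*.arc; do
  base=$(basename "$f")
  gzip -c "$f" > "/tmp/$base.gz"
  scp "/tmp/$base.gz" standby:/u01/arch/ORCL/
  ssh standby "gzip -d /u01/arch/ORCL/$base.gz &&
    echo \"alter database register logfile '/u01/arch/ORCL/$base';\" |
    sqlplus -s / as sysdba"
done
```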
>
> So the new plan looks like:
>
> Stop the application - defer standby log dest
> Create the new tablespaces and datafiles --> this should propagate to the
> standby
> Export tables using datapump
> Drop tables and indexes --> this should propagate to the standby
> Import tables using datapump with remap_tablespace option --> this should
> propagate to the standby
> Start application and test
> Enable standby log dest
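>
> The steps above, sketched as commands (the destination number, tablespace
> and object names are placeholders):

```shell
# 1. Stop the application, then defer the standby log destination.
echo "alter system set log_archive_dest_state_2=defer;" | sqlplus -s / as sysdba

# 2. Create the new tablespaces (propagates with STANDBY_FILE_MANAGEMENT=AUTO).
echo "create tablespace NEW_DATA datafile '/u02/oradata/new_data01.dbf' size 30g;" |
  sqlplus -s / as sysdba

# 3. Export, drop, and re-import into the new tablespaces.
expdp system/password PARFILE=exp_tabs.par
echo "drop table app.orders purge;" | sqlplus -s / as sysdba   # repeated per table
impdp system/password PARFILE=imp_tabs.par REMAP_TABLESPACE=OLD_DATA:NEW_DATA

# 4. After the application checks out, re-enable the standby destination.
echo "alter system set log_archive_dest_state_2=enable;" | sqlplus -s / as sysdba
```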
>
> Unless there are other issues, I really want to thank everyone for their
> input.  Everybody had good points and they forced me to re-evaluate and
> re-think the entire exercise looking at a variety of options.  With Oracle,
> there is generally more than one way to accomplish a given task.
>
> Much appreciated.
>
>
>
>  On Fri, Oct 2, 2009 at 4:57 AM, Ian Cary <ian.cary@xxxxxxxxxxxxxx> wrote:
>
> Just a couple of questions regarding your approach.
>
> 1. Why not use the REMAP_TABLESPACE option in Data Pump rather than having
> to extract the DDL and manually recreate the objects?
> 2. If you do use the DDL approach, I'd suggest creating the tables
> first, then importing the data only, and then creating indexes/constraints,
> as this should be quicker.
>
> Another possibility is to use the SQL*Plus COPY command, which will cope
> with your LONG columns; you could then rename the objects afterwards rather
> than using expdp/impdp.
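>
> A COPY sketch along those lines (connect string, schema and table names are
> invented):

```shell
# SQL*Plus COPY can move LONG data that CTAS over a database link cannot.
sqlplus /nolog <<'EOF'
set arraysize 1000
set copycommit 100
set long 2000000000
COPY FROM app/password@orcl -
  CREATE app.orders_new -
  USING select * from app.orders
EOF
```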
>
> Cheers,
>
> Ian
>
>
>
> From: david.barbour1@gmail.com
> Sent by: oracle-l-bounce@freelists.org
> Date: 01/10/2009 22:44
> Please respond to: david.barbour1
> To: oracle-l@xxxxxxxxxxxxx
> Subject: Export/Import with Physical Standby
>
>
>
>
>
> Good Morning,
>
> I've got a requirement to move about 300GB of table and index data from the
> current tablespaces into new tablespaces.  There is a physical standby in
> place.  The standby is located in another city.  The database is 7TB in
> size.  I am running Oracle 10.2.0.4 on AIX 6.1.
>
> The standby is currently mounted, with the STANDBY_FILE_MANAGEMENT
> initialization parameter set to AUTO.  We are not using ASM.
>
> I'd like to get a sanity check on the plan I've put together.
>
> Stop the application
> Create the new tablespaces and datafiles --> this should propagate to the
> standby
> Get table and index DDL using dbms_metadata.get_ddl
> Export tables using datapump
> Drop tables and indexes --> this should propagate to the standby
> Edit DDL to recreate tables and indexes in new tablespaces
> Run create scripts --> this should propagate to the standby
> Import tables using datapump --> this should propagate to the standby
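>
> For the DDL step, a dbms_metadata sketch (the schema and tablespace names
> are placeholders):

```shell
# Spool table and index DDL for everything in the old tablespace.
sqlplus -s / as sysdba <<'EOF' > recreate_objects.sql
set long 1000000 pagesize 0 linesize 200 trimspool on heading off feedback off
select dbms_metadata.get_ddl('TABLE', table_name, owner)
  from dba_tables where owner = 'APP' and tablespace_name = 'OLD_DATA';
select dbms_metadata.get_ddl('INDEX', index_name, owner)
  from dba_indexes where owner = 'APP' and tablespace_name = 'OLD_DATA';
EOF
# Edit recreate_objects.sql to name the new tablespaces, then run it.
```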
>
> All tables will remain in the same schema.
>
> Comments appreciated.
>
>
>
>   This email was received from the INTERNET and scanned by the Government
> Secure Intranet anti-virus service supplied by Cable&Wireless in
> partnership with MessageLabs. (CCTM Certificate Number 2009/09/0052.) In
> case of problems, please call your organisation’s IT Helpdesk.
> Communications via the GSi may be automatically logged, monitored and/or
> recorded for legal purposes.
>
>
