RE: cross platform migration

  • From: max scalf <oracle.blog3@xxxxxxxxx>
  • To: kyle Hailey <kylelf@xxxxxxxxx>
  • Date: Thu, 27 Mar 2014 16:57:11 -0700

  Hi Kyle,

That is correct, Shareplex is certified by SAP but DBvisit isn't yet.

Delphix sounds very interesting.  I am going to look into it right away.
Is it platform independent?

Another reason why we are trying to do a POC with Shareplex (and not Golden
Gate) is because our old DBs (10.2.0.4, DB sizes from 20 - 60 TB) are
sitting on HP-UX PA-RISC 11.11 and Oracle does not support Golden Gate on
that platform anymore, but Shareplex does.  So I am hoping Delphix handles that
as well.  We are moving from HP-UX PA-RISC 11.11 to RHEL (the DB version is
going to stay the same due to a SAP kernel conflict).

Thank you
 ------------------------------
From: kyle Hailey <kylelf@xxxxxxxxx>
Sent: 3/27/2014 6:01 PM
To: max scalf <oracle.blog3@xxxxxxxxx>
Cc: Jinwen Zou <zjworacle@xxxxxxxxx>; Svetoslav Gyurov
<softice@xxxxxxxxx>; Jack
van Zanen <jack@xxxxxxxxxxxx>; oracle-l@xxxxxxxxxxxxx
Subject: Re: cross platform migration



Delphix is an Agile Data Platform. What does that mean?  It means that
Delphix is all about getting data to the right place quickly, easily and
with low overhead (i.e. agile = easy & fast). Delphix enables cloning of
the data many times over in minutes for almost no storage overhead, and it
works for databases as well as application data.

Shareplex is a tool that reads Oracle redo logs and replicates the changes
out. Quite different. Why would you need Shareplex if you have DBvisit?
Is Shareplex certified with SAP but DBvisit isn't?

For more on Delphix see http://kylehailey.com/delphix

Delphix is software that installs as a VM under VMware. Delphix connects to
a source database and pulls in a history of changes into a time window of
data. Using that data on Delphix, clones of the source can be made in
minutes onto other machines by externalizing the datafiles via NFS. It's
all a few clicks of a mouse in a GUI.
With a virtual clone there is no data copy. A virtual clone initially just
sees the pre-existing data on Delphix. As the clone makes
modifications, those modifications are also saved on Delphix, but in a
different location than the original and visible only to the clone that
made the change.
Virtualized data means duplicate blocks are shared. If a block is modified
it's written elsewhere and kept private to the modifier.
VDB = virtual database, i.e. a database whose datafiles are virtualized on
Delphix.

Delphix:

   - *Agile Data* - as stated above
   - *Cloud ready* - replicate 100s of VDBs into the cloud for a fraction of
   the network bandwidth, for SQL Server, Oracle, Postgres, Sybase (beta) and
   others coming such as MySQL
   - *Fully Automated* - everything is a few clicks of a mouse. On hardware
   and networks that are ready, it takes 5 minutes to install and 5 minutes to
   configure. The initial link can run overnight fully automated, and the next
   day you can spin up VDBs in minutes.
   - *Open Stack* - supports any data and databases (only supported
   databases come with complete automation) on any storage and on Linux,
   Windows, HP-UX, Solaris and AIX
   - *Elastic compute* - spin up VDBs anywhere on the network in minutes,
   move VDBs, consolidate VDBs onto fewer machines
   - *Audit ready* - live archive database versions for Sarbanes-Oxley
   - *Version Controlled Data* - tag, branch, rollback, refresh data and
   databases. Have a data control system connecting code versions with data
   versions for a fraction of the storage required for full copies, and it's
   all automated. It can be run by a developer.
   - *Open Stack Migration Option* - automates migrating Oracle from Unix
   systems onto Linux; this is huge


- Kyle
http://kylehailey.com




On Thu, Mar 27, 2014 at 3:16 PM, max scalf <oracle.blog3@xxxxxxxxx> wrote:

>  Kyle,
>
> Thanks for your input.  I didn't know Delphix is also SAP certified.  We
> are looking into Dell Shareplex to do a POC.  Would you happen to know
> anything about that product?  If so, what are some pros/cons compared to
> Delphix?
>
> I am going to look into Delphix as well.  Thanks for pointing me in that
> direction.
>
> Thank you
>  ------------------------------
> From: kyle Hailey <kylelf@xxxxxxxxx>
> Sent: 3/27/2014 1:43 PM
> To: oracle.blog3 <oracle.blog3@xxxxxxxxx>
> Cc: Jinwen Zou <zjworacle@xxxxxxxxx>; Svetoslav Gyurov <softice@xxxxxxxxx>;
> Jack van Zanen <jack@xxxxxxxxxxxx>; oracle-l@xxxxxxxxxxxxx
>
> Subject: Re: cross platform migration
>
>
> PS on the SAP part, Delphix is an Endorsed SAP Business Solution and many
> Delphix customers use Delphix for SAP
>
> - Kyle
> http://kylehailey.com
>
>
>
> On Thu, Mar 27, 2014 at 11:23 AM, kyle Hailey <kylelf@xxxxxxxxx> wrote:
>
>>
>> FYI Delphix offers an automated cross platform conversion that only takes
>> 1% extra storage on top of the original source database copy. The source
>> never needs to be stopped or put in read only (as is the typical constraint
>> with the Oracle RMAN method). With Delphix you can have both the original
>> and the cross-platform copy in about 1/3 of the space of the original, due
>> to compression and vector mapping of the endianness switches.
>>
>> For details see
>>
>> http://www.oraclerealworld.com/oracle-cross-platform-provisioning-magic-from-the-mess/
>>
>> which outlines how to manually convert the database with RMAN and how
>> Delphix automates and streamlines the process.
>> If you want to manually convert the database, the two links at the bottom
>> of the blog are awesome. One from Oracle and one from DB Specialists.
>>
>> Cross platform conversion is much easier when the endianness is the same.
>> There is an RMAN convert database command.
>> For cross platform conversion when the endianness changes, there is no
>> convert database command. You have to convert datafile by datafile.
>> You have to create a new database and you have to extract all the
>> important data from the SYSTEM tablespace, like users, procedures, grants etc.
>> Bit of a pain.
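>>
>> Purely as an illustration (the file names, paths and the exact platform
>> string below are placeholders; V$TRANSPORTABLE_PLATFORM lists the real
>> platform names), a per-datafile conversion run on the target looks
>> roughly like this:
>>
>> # convert_one.rman - example names only
>> CONVERT DATAFILE '/stage/users01.dbf'
>>   FROM PLATFORM 'HP-UX (64-bit)'
>>   FORMAT '/u02/oradata/TARGET/users01.dbf';
>>
>> run from RMAN connected to the target instance, and repeated (or
>> generated by a script) for every datafile being transported.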
>>
>> - Kyle
>> http://kylehailey.com
>>
>>
>>
>>
>>
>>
>> On Mon, Feb 17, 2014 at 4:55 AM, max scalf <oracle.blog3@xxxxxxxxx>wrote:
>>
>>> @Jinwen,
>>>
>>> Sad part is we do own licensing for another replication tool called
>>> DBVisit and have used that for migrations of other systems (non-SAP systems),
>>> but as these are SAP systems, the only way SAP supports a cross-platform
>>> migration (the Oracle way) is by either doing cross platform transportable
>>> tablespaces or using Oracle's GoldenGate.  No other products are supported.
>>>
>>> @oscar,
>>>
>>> Thanks for looking at that, I always tend to forget to check
>>> edelivery.oracle.com.  Looks like I will open a ticket with Oracle and
>>> hopefully they can get me the software...otherwise moving 30TB of data
>>> using cross platform transportable tablespaces will take a long time...
>>>
>>>
>>> On Sun, Feb 16, 2014 at 9:48 PM, Jinwen Zou <zjworacle@xxxxxxxxx> wrote:
>>>
>>>> If you go down the logical apply path, some zero/real-time data
>>>> integration/replication rivals of GoldenGate might be worth a look.
>>>> Type "Oracle Goldengate vs " plus another character [a-z] into Google
>>>> search, and Google will suggest the major rivals starting with that
>>>> character.
>>>>
>>>> Shareplex from Quest (bought by Dell), who owns TOAD for Oracle as well,
>>>> is one of the products used at one of my former companies; it was used to
>>>> replicate databases across data centers under a high OLTP workload on the
>>>> same platform and same Oracle version. However, I don't think cross
>>>> platform/version will be a problem.
>>>>
>>>>
>>>> On Mon, Feb 17, 2014 at 1:21 PM, max scalf <oracle.blog3@xxxxxxxxx>wrote:
>>>>
>>>>> Hello all,
>>>>>
>>>>> Looks like GOLDEN GATE is in the picture for our migration now, but
>>>>> one thing I am confused about is the supported version...our
>>>>> database (version 10.2.0.4) is on HP-UX PA-RISC 11.11...and we are going
>>>>> to go to Linux (DB version still the same)...so my question is, can I
>>>>> still install GoldenGate on HP-UX 11.11?  I am unable to find any
>>>>> GoldenGate version for that system, or do I just have to open a ticket
>>>>> with Oracle support and ask for an older version of GoldenGate that
>>>>> supports our 10.2.0.4 DB on HP-UX 11.11?  If I cannot use GoldenGate,
>>>>> what other options do I have?
>>>>>
>>>>>
>>>>> On Wed, Feb 12, 2014 at 7:29 PM, Svetoslav Gyurov 
>>>>> <softice@xxxxxxxxx>wrote:
>>>>>
>>>>>> Hi Max,
>>>>>>
>>>>>> Sorry for not making that clear. It's simply because you won't be able
>>>>>> to mount the filesystem on the "other side" once you split the BCV. CDS
>>>>>> is a feature of the Veritas Volume Manager which provides you with a
>>>>>> foundation for moving data between different systems within a
>>>>>> heterogeneous environment. Just type *sfhas_solutions_601_lin* into
>>>>>> Google and download the first PDF, which is *Veritas Storage
>>>>>> Foundation(tm) and High Availability Solutions 6.0.1 Solutions Guide - Linux*
>>>>>>
>>>>>> Sve
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Thu, Feb 13, 2014 at 1:13 AM, Jack van Zanen <jack@xxxxxxxxxxxx>wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>>
>>>>>>> One other gotcha is that Oracle RMAN convert cannot handle clusters
>>>>>>> with endian conversion.
>>>>>>> We are working with support at the moment to get a fix for it, but
>>>>>>> it has been a while now...
>>>>>>>
>>>>>>> Jack
>>>>>>>
>>>>>>> Jack van Zanen
>>>>>>>
>>>>>>> -------------------------
>>>>>>> This e-mail and any attachments may contain confidential material
>>>>>>> for the sole use of the intended recipient. If you are not the intended
>>>>>>> recipient, please be aware that any disclosure, copying, distribution or
>>>>>>> use of this e-mail or any attachment is prohibited. If you have received
>>>>>>> this e-mail in error, please contact the sender and delete all copies.
>>>>>>> Thank you for your cooperation
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Feb 13, 2014 at 12:09 PM, max scalf 
>>>>>>> <oracle.blog3@xxxxxxxxx>wrote:
>>>>>>>
>>>>>>>> Sve,
>>>>>>>>
>>>>>>>> Golden Gate is out of budget here and also SAP does not support
>>>>>>>> that part.  We were told to make this happen within the current license
>>>>>>>> agreement.
>>>>>>>>
>>>>>>>> Please excuse my knowledge here about BCV splits.  But let's say I
>>>>>>>> do not have BCV splits in place; my process would be to
>>>>>>>>
>>>>>>>> 1. put the tablespaces in read only mode
>>>>>>>> 2. export the metadata (transportable tablespace=y)
>>>>>>>> 3. copy the export files and also copy the datafiles from source
>>>>>>>> to target
>>>>>>>> 4. run the RMAN convert command on the target
>>>>>>>> 5. import the metadata (rough sketch of steps 2 and 5 below)
>>>>>>>>
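>>>>>>>> Just as an illustration (the tablespace names, directory object and
>>>>>>>> datafile paths below are placeholders, not our real ones), steps 2 and
>>>>>>>> 5 with Data Pump would look something like:
>>>>>>>>
>>>>>>>> # step 2, on the source: metadata-only transportable export
>>>>>>>> expdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
>>>>>>>>   transport_tablespaces=PSAPSR3,PSAPSR3USR transport_full_check=y
>>>>>>>>
>>>>>>>> # step 5, on the target: plug the metadata in over the converted files
>>>>>>>> impdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
>>>>>>>>   transport_datafiles='/oracle/TGT/sapdata1/sr3.data1','/oracle/TGT/sapdata1/sr3usr.data1'
>>>>>>>>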
>>>>>>>> so my question is, why do I need that extra step of converting the
>>>>>>>> endianness at the storage level (CDS)...is that a standard thing?  Our
>>>>>>>> storage admins are the ones who take care of the BCV splits, and I am
>>>>>>>> hoping that if we are going to use the BCV splits then I don't need that
>>>>>>>> CDS thing you talked about earlier, or if we do, is that a standard tool
>>>>>>>> given by storage vendors (I believe our vendor is EMC) or do I need
>>>>>>>> special licensing for it?
>>>>>>>>
>>>>>>>>
>>>>>>>> On Wed, Feb 12, 2014 at 6:55 PM, Svetoslav Gyurov <
>>>>>>>> softice@xxxxxxxxx> wrote:
>>>>>>>>
>>>>>>>>> Hi Max,
>>>>>>>>>
>>>>>>>>> Indeed, you need to convert the file system first, which will save
>>>>>>>>> you copying all 30TB of data, and then you need to run RMAN convert.
>>>>>>>>>
>>>>>>>>> Are you considering GoldenGate as an option, or would it be out of
>>>>>>>>> budget?
>>>>>>>>>
>>>>>>>>> Sve
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Feb 13, 2014 at 12:44 AM, max scalf <
>>>>>>>>> oracle.blog3@xxxxxxxxx> wrote:
>>>>>>>>>
>>>>>>>>>> Thanks Sve, I was under the impression that I could just take the
>>>>>>>>>> mount point from HP and mount it over on Linux, as Oracle was going to
>>>>>>>>>> do the RMAN conversion process for me.  But you are saying I need to do
>>>>>>>>>> it at the storage level (CDS) and then also do it at the database level?
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Wed, Feb 12, 2014 at 5:06 PM, Svetoslav Gyurov <
>>>>>>>>>> softice@xxxxxxxxx> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi Max,
>>>>>>>>>>>
>>>>>>>>>>> Yeah, BCVs have been my favorite when we need to clone or refresh
>>>>>>>>>>> the DEV/UAT environments. However, these platforms still have
>>>>>>>>>>> different endianness and you need to convert the file system itself.
>>>>>>>>>>> This can be done using the Cross-platform Data Sharing (CDS) feature
>>>>>>>>>>> of Symantec's Veritas Storage Foundation software, which will allow
>>>>>>>>>>> you to create portable data containers (PDC) and mount the volumes on
>>>>>>>>>>> different platforms. I remember seeing a similar presentation one or
>>>>>>>>>>> two years ago (maybe an OOW presentation) about using this approach
>>>>>>>>>>> and greatly reducing the time for migration.
>>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>> Sve
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Feb 12, 2014 at 10:42 PM, max scalf <
>>>>>>>>>>> oracle.blog3@xxxxxxxxx> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Sve/All,
>>>>>>>>>>>>
>>>>>>>>>>>> Thanks for your input.  I guess RMAN incremental
>>>>>>>>>>>> backup/restore/recover is out of the picture, and so is Data Guard.
>>>>>>>>>>>>  We are going from 10.2.0.4 to 10.2.0.4 (due to some SAP kernel
>>>>>>>>>>>> restrictions).
>>>>>>>>>>>>
>>>>>>>>>>>> For that 30TB database, all of the data is usable (cannot be
>>>>>>>>>>>> purged/archived), so we have to move it all to the new platform. But
>>>>>>>>>>>> one thing I can think of is that for the big database we do have a
>>>>>>>>>>>> BCV split/mirror in place; can I somehow use that?
>>>>>>>>>>>>
>>>>>>>>>>>> For example
>>>>>>>>>>>> 1. on the source DB, put all tablespaces in read only mode and start
>>>>>>>>>>>> the metadata export (transportable tablespace=y)
>>>>>>>>>>>> 2. once in read only mode, take a BCV split (in parallel) of all
>>>>>>>>>>>> the datafile mount points and mount them on the target
>>>>>>>>>>>> 3. once the file system is mounted on the target, start the RMAN
>>>>>>>>>>>> conversion process (how long would this take? is this dependent on DB
>>>>>>>>>>>> size or what?)
>>>>>>>>>>>> 4. once conversion is completed, start the import of the
>>>>>>>>>>>> metadata
>>>>>>>>>>>>
>>>>>>>>>>>> if the above can be used, the only concern I have is that we have
>>>>>>>>>>>> probably about 2K - 3K datafiles (spread across 100s of mount points)
>>>>>>>>>>>> and I might somehow miss running the RMAN convert for some of those
>>>>>>>>>>>> datafiles, or miss them while doing the import part (where I believe I
>>>>>>>>>>>> have to give the locations of all the datafiles)...any pointers here?
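>>>>>>>>>>>>
>>>>>>>>>>>> One possible safety net, purely as a sketch (the platform string and
>>>>>>>>>>>> the /target path prefix are placeholders), is to generate the convert
>>>>>>>>>>>> commands straight from the dictionary so no file gets missed:
>>>>>>>>>>>>
>>>>>>>>>>>> set pages 0 lines 500 trimspool on
>>>>>>>>>>>> spool convert_all.rman
>>>>>>>>>>>> -- one CONVERT DATAFILE command per datafile known to the database
>>>>>>>>>>>> select 'CONVERT DATAFILE '''||file_name||''''
>>>>>>>>>>>>        ||' FROM PLATFORM ''HP-UX (64-bit)'''
>>>>>>>>>>>>        ||' FORMAT ''/target'||file_name||''';'
>>>>>>>>>>>> from dba_data_files order by file_name;
>>>>>>>>>>>> spool off
>>>>>>>>>>>>
>>>>>>>>>>>> then run the spooled script in RMAN on the target and compare the
>>>>>>>>>>>> converted file count against dba_data_files afterwards to catch
>>>>>>>>>>>> anything that was skipped.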
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Feb 12, 2014 at 3:38 PM, Svetoslav Gyurov <
>>>>>>>>>>>> softice@xxxxxxxxx> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Max,
>>>>>>>>>>>>>
>>>>>>>>>>>>> My comments are inline, I assume you are migrating 10.2.X to
>>>>>>>>>>>>> 11.2.X ?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Feb 12, 2014 at 8:54 PM, max scalf <
>>>>>>>>>>>>> oracle.blog3@xxxxxxxxx> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Hello List,
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I have a project that is going to get started soon and I
>>>>>>>>>>>>>> wanted to get some pointers with regards to it.  Please excuse my
>>>>>>>>>>>>>> knowledge, as I am from a SQL Server background and a seasonal Oracle
>>>>>>>>>>>>>> DBA.  The project is to move our DBs (multiple DBs, sizes from 1TB -
>>>>>>>>>>>>>> 30TB) from HP-UX PA-RISC to RHEL.  We have quite a few restrictions
>>>>>>>>>>>>>> in options as our app is SAP :-( .  A couple of SAP notes I read
>>>>>>>>>>>>>> suggested that we can use cross platform transportable tablespaces,
>>>>>>>>>>>>>> which is what I am planning to do as well.  I wanted to find out a
>>>>>>>>>>>>>> couple of things from the list:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>    1. First of all, has anyone done this (on an SAP system)?
>>>>>>>>>>>>>>    If so, any gotchas?
>>>>>>>>>>>>>>    2. To reduce the downtime I was planning to do a restore
>>>>>>>>>>>>>>    ahead of the cutover (let's say 3 days in advance) and then keep
>>>>>>>>>>>>>>    applying archive logs until the day of cutover.  Is that even
>>>>>>>>>>>>>>    possible for this situation (as I have to do an RMAN convert of
>>>>>>>>>>>>>>    the datafiles and then keep applying logs)?
>>>>>>>>>>>>>>
>>>>>>>>>>>>> Nope, you cannot restore/recover across mixed platforms.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>    3. I cannot do transportable DATABASE, as I am going from
>>>>>>>>>>>>>>    big endianness to little (I believe #2 is possible here, as I
>>>>>>>>>>>>>>    read this doc:
>>>>>>>>>>>>>>    http://www.pythian.com/blog/howto-oracle-cross-platform-migration-with-minimal-downtime/
>>>>>>>>>>>>>>    )
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> Correct, they would need to have the same endianness, and you
>>>>>>>>>>>>> are migrating from big to little.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>>    4. I read an option someplace that mentioned I could
>>>>>>>>>>>>>>    have a heterogeneous Data Guard setup for this migration, but
>>>>>>>>>>>>>>    when I read MOS Doc ID 413484.1, I do not think HP-UX to RHEL
>>>>>>>>>>>>>>    Data Guard is supported, or have I gotten that wrong?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Indeed, it is not:
>>>>>>>>>>>>> RMAN DUPLICATE/RESTORE/RECOVER Mixed Platform Support (Doc ID
>>>>>>>>>>>>> 1079563.1)
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>>    5. Any other recommendations in general due to the size of
>>>>>>>>>>>>>>    our DBs?  The one I am worried about is our 30TB DB, which takes
>>>>>>>>>>>>>>    about 18 hours for the weekly Level 0 backup, and the customer
>>>>>>>>>>>>>>    wants to do the migration in less than 10 hours.
>>>>>>>>>>>>>>
>>>>>>>>>>>>> Does all of the 30TB DB contain operational data? Are there any
>>>>>>>>>>>>> read-only or archive tablespaces?
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>>    6. Is cross platform transportable tablespace a bad idea,
>>>>>>>>>>>>>>    as SAP creates thousands and thousands of objects in the database
>>>>>>>>>>>>>>    and the metadata would be too much to export/import?
>>>>>>>>>>>>>>
>>>>>>>>>>>>> I can't think of a limitation there. You might export
>>>>>>>>>>>>> your metadata in parallel and also exclude statistics to improve the
>>>>>>>>>>>>> time. Your problem here would be the time it takes to copy the 30TB
>>>>>>>>>>>>> over to the new platform and then convert it.
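>>>>>>>>>>>>>
>>>>>>>>>>>>> With Data Pump the relevant knobs are PARALLEL and EXCLUDE=STATISTICS;
>>>>>>>>>>>>> roughly (the tablespace names are made up, and whether both options
>>>>>>>>>>>>> are honoured in transportable mode on 10.2 is worth verifying first):
>>>>>>>>>>>>>
>>>>>>>>>>>>> # illustrative names only
>>>>>>>>>>>>> expdp system directory=DATA_PUMP_DIR dumpfile=tts_meta.dmp \
>>>>>>>>>>>>>   transport_tablespaces=PSAPSR3,PSAPSR3USR parallel=4 exclude=statistics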
>>>>>>>>>>>>>
>>>>>>>>>>>>> GoldenGate of course is the holy grail. A quick look on MOS
>>>>>>>>>>>>> shows that HP-UX PA-RISC is a supported platform for Oracle GoldenGate
>>>>>>>>>>>>> 11.2.1.0.6.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>> Sve
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> I would really appreciate some pointers.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Max
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
