Re: Oracle Partitioning - 600GB Table

  • From: Seth Miller <sethmiller.sm@xxxxxxxxx>
  • To: Sameer Choudhari <sameer.choudhari@xxxxxxxxx>
  • Date: Fri, 12 Dec 2014 09:06:34 -0600

You will need the additional space (temporarily) regardless of what you do
unless you can take an extended outage to move it over row by row.

Seth Miller

On Fri, Dec 12, 2014 at 1:54 AM, Sameer Choudhari <
sameer.choudhari@xxxxxxxxx> wrote:

> Hi Seth
>
> Yes, you are correct. Initially I thought of using Karth's OPTION-1, but
> that is also not feasible in my case, and I am not confident enough to opt
> for OPTION-2 either. I will test the ONLINE REDEFINITION option in a dev
> box and then implement it in PROD. At this moment, I am not sure how long
> the task will take. I will also need to add another 600GB, as
> DBMS_REDEFINITION creates MLOG$ (as per Doc ID 1481558.1).
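>
> For reference, the basic online redefinition flow I plan to test looks
> roughly like this (schema, table, and interim-table names are
> illustrative placeholders, not the real ones):
>
> ```sql
> -- Sketch only: SCOTT/BIG_TAB/BIG_TAB_PART are placeholder names.
> BEGIN
>   -- 1. Check the table qualifies for online redefinition
>   DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG_TAB');
>   -- 2. Start copying into the pre-created, partitioned interim table
>   DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_PART');
>   -- 3. Re-sync periodically so the final switchover window stays short
>   DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_PART');
>   -- 4. Swap the tables' identities
>   DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG_TAB', 'BIG_TAB_PART');
> END;
> /
> ```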
>
> Cheers,
> Sameer <http://www.choudhari.net>
>
> On 12 December 2014 at 09:48, Seth Miller <sethmiller.sm@xxxxxxxxx> wrote:
>>
>> Sameer,
>>
>> You mentioned that you define the new table structure with
>> DBMS_REDEFINITION but use imp-exp to load it. Is there a reason you don't
>> use DBMS_REDEFINITION to sync the tables?
>>
>> Assuming the data in this table is changing rapidly and is important to
>> the business, are you really going to rely on a matching row count to
>> determine that the data is synchronized between the two tables?
>>
>> I'm trying not to sound mean, but you were on the right track to achieve
>> your task by using DBMS_REDEFINITION, and then you jumped the track,
>> flipped three times, and hit a wall.
>>
>> Seth Miller
>>
>>
>> On Thursday, December 11, 2014, Sameer Choudhari <
>> sameer.choudhari@xxxxxxxxx> wrote:
>>
>>> Hi Karth
>>>
>>> Thanks for your tips. I will try to use Option-1 and update once
>>> implemented.
>>>
>>> Cheers,
>>> Sameer <http://www.choudhari.net>
>>>
>>> On 10 December 2014 at 21:41, Karth Panchan <keyantech@xxxxxxxxx> wrote:
>>>
>>>> Sameer
>>>>
>>>> I have handled a 10TB compressed database in banking.
>>>>
>>>> You can approach this in two ways.
>>>>
>>>> Option 1 (steps in sequence):
>>>>
>>>> Create the new table (with your partition definition)
>>>> as SELECT * FROM the current table.
>>>>
>>>> Rebuild the indexes under new names.
>>>> Create the referential constraints, disabled, under new names.
>>>> Create the triggers under new names.
>>>> (For the above three steps, take the definitions from the dictionary
>>>> tables for your current table.)
>>>>
>>>> Verify that the record counts match between the new and current tables.
>>>>
>>>> When they match:
>>>>
>>>> Rename the current table as a backup.
>>>> Rename the new table as the current one.
>>>> Disable the backup table's referential constraints.
>>>> Drop/disable the triggers on the backup table.
>>>>
>>>> End.
>>>>
>>>> In this approach we need an extra 600GB, but it is a safe method: if
>>>> anything goes wrong, you have the backup for rollback. Instead of
>>>> renaming, you can drop the backup table to save storage, with caution.
>>>> You can also compress the old data partitions to save storage.
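>>>>
>>>> A sketch of Option 1 in SQL (table names, partition key, and bounds
>>>> are illustrative, not from your system):
>>>>
>>>> ```sql
>>>> -- Build the partitioned copy (needs the extra ~600GB temporarily)
>>>> CREATE TABLE big_tab_new
>>>>   PARTITION BY RANGE (txn_date) (
>>>>     PARTITION p2013 VALUES LESS THAN (DATE '2014-01-01'),
>>>>     PARTITION p2014 VALUES LESS THAN (DATE '2015-01-01'))
>>>> AS SELECT * FROM big_tab;
>>>>
>>>> -- After rebuilding indexes/constraints/triggers and verifying counts:
>>>> ALTER TABLE big_tab RENAME TO big_tab_bkp;
>>>> ALTER TABLE big_tab_new RENAME TO big_tab;
>>>> ```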
>>>>
>>>> Option 2:
>>>>
>>>> This approach assumes the data in this table is history + current,
>>>> where the history data never has DML activity.
>>>>
>>>> Create the new table (with your partition definition)
>>>> as SELECT * FROM the current table
>>>> WHERE date < your current dataset date.
>>>>
>>>> Create a staging table (non-partitioned but with the same structure as
>>>> the current table)
>>>> as SELECT * FROM the current table WHERE 1=2.
>>>>
>>>> Here you pre-stage the new table with the history data. Run a job
>>>> daily/weekly to load the current data into the staging table and do a
>>>> partition swap.
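>>>>
>>>> The swap itself would be a partition exchange, roughly like this
>>>> (partition and table names are illustrative):
>>>>
>>>> ```sql
>>>> -- Swap the loaded staging table with the matching (empty) partition;
>>>> -- this is a data-dictionary operation, not a data copy.
>>>> ALTER TABLE big_tab_new
>>>>   EXCHANGE PARTITION p2014
>>>>   WITH TABLE big_tab_stage
>>>>   INCLUDING INDEXES WITHOUT VALIDATION;
>>>> ```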
>>>>
>>>> Finally, once the data sync is done, you can rename the tables.
>>>>
>>>> I have implemented both solutions.
>>>>
>>>> Hope this is helpful to you.
>>>>
>>>> Karth
>>>>
>>>>
>>>> On Dec 10, 2014, at 4:10 AM, Sameer Choudhari <
>>>> sameer.choudhari@xxxxxxxxx> wrote:
>>>>
>>>> Hello Oracle-L
>>>>
>>>> Oracle DB : 10.2.0.4
>>>> OS            : IBM AIX
>>>> DB SIZE     : 5TB
>>>> Table Size : 600GB
>>>> Application : Banking System - OLTP. Busy system.
>>>>
>>>> I have to RANGE partition this table on a date field. Looking at the
>>>> size of the table, I am confused about which method to follow. In the
>>>> past I have done this using DBMS_REDEFINITION and the imp-exp feature
>>>> for small tables, i.e. <50GB. This time, I am worried by the size of
>>>> the table. Could you please suggest a better or more reliable way of
>>>> partitioning, with minimal or no downtime?
>>>>
>>>> Cheers,
>>>> Sameer <http://www.choudhari.net>
>>>>
>>>>
>>>