Re: Huge Table Insert - take long time?

  • From: Mladen Gogala <gogala.mladen@xxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Fri, 27 Jul 2018 14:43:06 -0400

Why? Guessing is fun. It's like fantasy football, only with other people's databases.


On 07/27/2018 01:19 PM, Insights wrote:

Stop guessing and get the AWR, ASH, and SQL explain plan at the very least.

Sent from my iPhone

On Jul 26, 2018, at 7:58 PM, Woody McKay <woody.mckay@xxxxxxxxx> wrote:

Ya got me thinking about the "load data by partition" idea...

I wonder: instead of an insert at DOP 16 from a select at DOP 16, what if I had, say, 50 nonpartitioned tables (one for every partition) and ran a DOP 4 insert into each from its *_old partition, all at the same time? Then do a partition exchange from the 50 nonpartitioned tables into the main fully partitioned table. The exchange is quick since it is just a data dictionary change.

This may be faster because the aggregate DOP is higher. Maybe fewer exchange waits? I'm assuming a single DOP 16 insert can't work all 50 partitions together as efficiently as 50 concurrent loads into nonpartitioned tables.
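
A minimal sketch of one of those 50 jobs (stage_p01, p01, and the
column names are all hypothetical):

    ALTER SESSION ENABLE PARALLEL DML;

    -- one of 50 concurrent sessions, each loading its own staging table
    INSERT /*+ APPEND PARALLEL(4) */ INTO stage_p01
    SELECT /*+ PARALLEL(4) */ col1, col2, NULL AS new_col  -- new shape
    FROM   main_old PARTITION (p01);
    COMMIT;

    -- swap it in: a data dictionary change, not a data copy
    ALTER TABLE new_main EXCHANGE PARTITION p01
      WITH TABLE stage_p01 WITHOUT VALIDATION;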

Got me thinking...




On Mon, Jul 23, 2018 at 8:15 PM Karthikeyan Panchanathan <keyantech@xxxxxxxxx> wrote:

    What is the partition type involved here?

    One option: load the data by partition into an intermediary
    table (with the new columns) and exchange. This way you can run
    parallel sessions for each partition, and you can build the new
    table's indexes at the partition level before the exchange.
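
    Something like this, with an intermediary table stage_p01 for
    partition p01 (all names hypothetical):

        -- build the index on the intermediary table first, in parallel
        CREATE INDEX stage_p01_ix ON stage_p01 (id) PARALLEL 4;

        -- INCLUDING INDEXES assumes a matching local index already
        -- exists on the partitioned table
        ALTER TABLE new_main EXCHANGE PARTITION p01
          WITH TABLE stage_p01 INCLUDING INDEXES WITHOUT VALIDATION;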

    You mentioned another DB; have you thought about moving the data
    from that DB to prod using Data Pump?
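
    Roughly (schema, directory, and file names hypothetical):

        expdp system/<pw> TABLES=app.main_old DIRECTORY=dp_dir \
            DUMPFILE=main_%U.dmp FILESIZE=32G PARALLEL=8

        impdp system/<pw> DIRECTORY=dp_dir DUMPFILE=main_%U.dmp PARALLEL=8 \
            REMAP_TABLE=app.main_old:new_main TABLE_EXISTS_ACTION=APPEND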

    Karth

    Sent from my iPhone

    On Jul 23, 2018, at 7:52 PM, Woody McKay
    <woody.mckay@xxxxxxxxx> wrote:

    Hi gurus, I'm fishing for creative thoughts here...

    12.1.0.2 RAC on Exadata with FORCE LOGGING for standby

    There's a refactoring process (subpartitioned tables with
    millions to a few billion rows; rough SQL is sketched after
    the steps):

    1. rename the main table to main_old
    2. create the new_main table (no indexes or constraints at this
    point)
    3. set table prefs for concurrent incremental stats
    4. enable parallel dml
    5. insert into new_main from main_old using APPEND PARALLEL 16
    (note: most tables are compressed)
    6. create a PK or an index (as needed) on new_main
    7. gather table stats on new_main
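
    In rough SQL, with the subpartitioning left out and every
    name, column, and compression setting purely illustrative:

        ALTER TABLE main RENAME TO main_old;

        CREATE TABLE new_main (
            id       NUMBER,
            txn_dt   DATE,
            new_col  VARCHAR2(30)   -- column added by the refactor
        )
        COMPRESS FOR QUERY HIGH     -- most tables are compressed
        PARTITION BY RANGE (txn_dt)
          ( PARTITION p2018 VALUES LESS THAN (DATE '2019-01-01') );

        EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL')
        EXEC DBMS_STATS.SET_TABLE_PREFS(USER, 'NEW_MAIN', 'INCREMENTAL', 'TRUE')

        ALTER SESSION ENABLE PARALLEL DML;

        INSERT /*+ APPEND PARALLEL(16) */ INTO new_main
        SELECT /*+ PARALLEL(16) */ id, txn_dt, NULL FROM main_old;
        COMMIT;

        ALTER TABLE new_main ADD CONSTRAINT new_main_pk PRIMARY KEY (id)
          USING INDEX (CREATE UNIQUE INDEX new_main_pk
                       ON new_main (id) PARALLEL 16);

        EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'NEW_MAIN')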

    The insert in Step 5 took 9.5 hours... Total refactor process
    time was about 13 hours.

    Is there anything you can think of to make this process any faster?

    I assume the APPEND hint isn't doing much because the DB has
    FORCE LOGGING. Guessing ALTER TABLE ... NOLOGGING wouldn't do
    much either. Can't exchange from old to new, since new most
    likely has columns added.
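
    One quick sanity check (this is what makes APPEND moot):

        -- YES here means table-level NOLOGGING is overridden, and
        -- direct-path inserts still generate full redo for the standby
        SELECT force_logging FROM v$database;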

    I've mentioned using ALTER TABLE if we are only adding a few
    columns, but the production DBAs are worried about chained
    rows, fragmentation, possible explain plan changes, etc. I'm
    not sure their concerns are justified.
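
    For what it's worth, the ALTER itself should be cheap; it's a
    later backfill that could do what they're worried about
    (hypothetical column):

        -- adding a nullable column with no default is a dictionary-only
        -- change; existing rows are not touched
        ALTER TABLE main ADD (new_col VARCHAR2(30));

        -- it's backfilling the column that can migrate rows
        -- in already-full blocks
        UPDATE main SET new_col = 'backfilled value';
        COMMIT;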

    I've thought of doing the refactor in a different DB (where I
    can truly have NOLOGGING) and using transportable tablespaces,
    but I don't think that would be any better.

    Any thoughts or ideas to try?

    Thanks, as always - Woody




--
Sincerely,

Woody McKay

--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
