Re: parallel select

  • From: <genegurevich@xxxxxxxxxxxx>
  • To: Sergey Popov <sspopov@xxxxxxxxx>
  • Date: Tue, 14 Jul 2009 11:36:57 -0500

Thank you, Sergey:

I did try increasing the MBRC to 128, but the timing did not change.

thank you

Gene Gurevich




                                                                       
From: Sergey Popov <sspopov@xxxxxxxxx>
To: greg@xxxxxxxxxxxxxxxxxx
Cc: genegurevich@xxxxxxxxxxxx, oracle-l@xxxxxxxxxxxxx
Date: 07/14/2009 10:52 AM
Subject: Re: parallel select




Sounds like you are doing it via SQL*Plus. This is not the fastest way to
extract data into a flat file. You can speed up the process in SQL*Plus if
you increase your MBRC (up to 128) and arraysize (up to 1000).
I would recommend getting the Pro*C code from AskTom, modifying the
delimiter/enclosing characters to your needs, compiling it, and running it
that way. I used it back in January for a ~3TB extract over dual 1GigE on
NFS (both the DB and the target filesystem sharing the same pipes). It was
done in 2 days.

Good luck!
Sergey

On Mon, Jul 13, 2009 at 7:05 PM, Greg Rahn <greg@xxxxxxxxxxxxxxxxxx> wrote:
  On Mon, Jul 13, 2009 at 2:42 PM, <genegurevich@xxxxxxxxxxxx> wrote:
  > I am watching a very long select * from a table with about 500mil rows.
  > It runs under oracle 10.2.0.3 for hours now. The table is about 35-40G
  > and the app team needs to dump it into a flat file. My DB block size is
  > 8K, the extent size (this is a LMTS) is 300M.

  That is quite a large extent size for a 40GB table.  Generally I would
  recommend auto allocate and just use a large initial/next if you want
  to bypass the 64KB/1MB/... extent ranges and jump to the larger
  extents.  Using such a large extent size could result in nearly 300MB
  being wasted from jumping above the HWM with a direct path load.
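  A sketch of that kind of setup (all names here are made up): AUTOALLOCATE
  lets Oracle choose the extent sizes, while a large INITIAL/NEXT starts the
  table out on large extents.

      -- hypothetical names, for illustration only
      create tablespace data_ts
        datafile '/u01/oradata/data_ts01.dbf' size 50g
        extent management local autoallocate;

      create table big_table (
        id   number,
        data varchar2(100)
      )
      tablespace data_ts
      storage (initial 64m next 64m);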

  > I have been watching the v$session_longops and the message that I have
  > been seeing looks something like this:
  >
  > Rowid Range Scan:  TABLE_NAME : 2899 out of 12647 Blocks done
  >
  > The table has about 5mil blocks. 12647 blocks is about one third of one
  > extent. I run it with DOP=4. So it looks like the parallel slaves are
  > each reading from one third of one extent at a time.

  The PX granule size is 100MB, which happens to be 1/3 of the table
  extent size of 300MB.  A granule is the unit of work for PX servers.
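  A rough way to watch that progress per granule from v$session_longops (a
  sketch; the opname filter matches the message quoted above):

      -- sketch: active "Rowid Range Scan" granules and their progress
      select sid, serial#, target, sofar, totalwork,
             round(100 * sofar / totalwork, 1) pct_done
        from v$session_longops
       where opname like 'Rowid Range Scan%'
         and sofar < totalwork;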

  > This operation seems to take about 40 minutes. Extrapolating to 5mil
  > blocks yields a huge number which will not be satisfactory to the
  > application. Why would it take 40 minutes to read 100M worth of data?
  > What could be the bottleneck?

  I would recommend that you:
  1) look at the ASH report to see what the session waits are, or
  2) look at the wait events for the PX servers in v$session and see what
  they are (see the sketch below)

  No need to guess what the bottleneck would be when the wait interface
  will tell you.
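  For item 2, a query along these lines (a sketch; :qc_sid is a placeholder
  bind for the query coordinator's SID) shows what the PX servers are
  currently waiting on:

      -- sketch: current wait event of each PX server under one coordinator
      select s.sid, s.event, s.seconds_in_wait, s.state
        from v$px_session px
        join v$session    s on s.sid = px.sid and s.serial# = px.serial#
       where px.qcsid = :qc_sid;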

  Things to consider:
  - what is your MBRC?  For FTS operations like this, 1MB is ideal (8KB
  block * 128 MBRC).  This should be the default if left unset (see the
  query below).
  - the speed of the filesystem you are writing to (hopefully a local FS)
  - sqlplus settings (arraysize, trimspool, etc)
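  To check the first item, the parameter can be read like this (a sketch):

      -- sketch: show the current MBRC and whether it was explicitly set
      select name, value, isdefault
        from v$parameter
       where name = 'db_file_multiblock_read_count';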

  --
  Regards,
  Greg Rahn
  http://structureddata.org
  --
  //www.freelists.org/webpage/oracle-l







--
//www.freelists.org/webpage/oracle-l

