Re: Issue with Parallel query execution

  • From: "Jonathan Lewis" <jonathan@xxxxxxxxxxxxxxxxxx>
  • To: <oracle-l@xxxxxxxxxxxxx>
  • Date: Fri, 23 Feb 2007 08:37:18 -0000


Govind,

It's probably not the number of rows in the table
that matters but the number of blocks, since the unit
of granularity for PX slaves sharing data is the block.

If you have very small rows (and/or large blocks)
and you want to enforce a higher degree of parallelism,
you could simply set a large value of pctfree for the
table so that it is forced to spread over more blocks
as you load it.
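A sketch of that pctfree trick, assuming a hypothetical
table called T (the name and values are illustrative, not
from this thread):

```sql
-- Hypothetical sketch: leave 90% of each block free so that
-- even a small number of rows spreads over many blocks.
alter table t pctfree 90;

-- Rebuild the segment so the new pctfree actually takes
-- effect for the existing rows (pctfree only applies to
-- blocks as they are formatted).
alter table t move;

-- Request the degree of parallelism you want for the table.
alter table t parallel 4;
```

With more blocks to hand out as granules, the optimizer has
more scope to keep all the requested PX slaves busy.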


Regards

Jonathan Lewis
http://jonathanlewis.wordpress.com

Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html

The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html


----- Original Message ----- From: "FreeLists Mailing List Manager" <ecartis@xxxxxxxxxxxxx>
To: "oracle-l digest users" <oracle-l@xxxxxxxxxxxxx>
Sent: Friday, February 23, 2007 8:03 AM

Subject: RE: Issue with Parallel query execution
Date: Thu, 22 Feb 2007 09:30:34 -0600

Thanks Gary! Capacity is not a problem for us; we had plenty of parallel threads available. Our intention was to reduce the elapsed time of the query. We are also looking into dynamically changing the parallel degree depending on the number of rows. My research suggests that it runs with a parallel degree of 4 when the number of rows is in excess of 5000, but I would set it to 4 once we have more than 1000 rows. Below that, I am comfortable with running in serial mode. This approach will suit us fine, as we have 15 concurrent threads running simultaneously, each working on one of 15 market groups.
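That threshold logic could be sketched with hints, assuming a
hypothetical table big_tab (the table name and the surrounding
application logic are illustrative, not from this thread):

```sql
-- Hypothetical sketch: the application picks one of two
-- statement variants based on a pre-computed row count.

-- When the row count exceeds 1000, request a DOP of 4:
select /*+ parallel(big_tab, 4) */ *
from   big_tab;

-- Otherwise, force the statement to run serially:
select /*+ no_parallel(big_tab) */ *
from   big_tab;
```

Note that a hinted degree is still only a request; as the reply
above points out, the table must span enough blocks for the PX
slaves to be given granules of work.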


--
//www.freelists.org/webpage/oracle-l

