Re: parallel select

  • From: Greg Rahn <greg@xxxxxxxxxxxxxxxxxx>
  • To: genegurevich@xxxxxxxxxxxx
  • Date: Tue, 14 Jul 2009 13:02:23 -0700

On Tue, Jul 14, 2009 at 11:34 AM, <genegurevich@xxxxxxxxxxxx> wrote:
> Thank you for the info. I found that hidden parameter for my own education.
> I did some testing with higher MBRC (from 32 to 128), but that did
> not change the timing. I also tried increasing arraysize to 5000, and that
> led to an increase in time of about 30%. I am running again with a
> lower value (500).

I mentioned arraysize 200 for a specific reason.  Not 2000, not 5000,
not 10000.  Increasing arraysize aids throughput only up to a certain
point, after which the performance curve inverts: the round-trip
savings from each additional row shrink quickly, while the memory and
per-fetch processing cost keeps growing.  More is not always better.

I believe Tom's array flat Pro*C program uses 100 for the array size,
though it can be specified on the command line.  I've found that 100
or 200 is generally the best place to be on the curve.
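
If you want to find the inflection point on your own data, a quick
SQL*Plus test along these lines will show it (big_table is just a
placeholder; autotrace traceonly statistics fetches every row without
rendering it, so you are timing the fetches rather than the terminal
output):

  -- compare elapsed time and "SQL*Net roundtrips to/from client"
  -- at a few arraysize settings; 15 is the SQL*Plus default
  set autotrace traceonly statistics
  set timing on

  set arraysize 15
  select * from big_table;

  set arraysize 200
  select * from big_table;

  set arraysize 5000
  select * from big_table;

The roundtrip count keeps falling as arraysize climbs, but somewhere
past a couple hundred rows the elapsed time flattens out and then
turns back up -- which is the inversion Gene ran into at 5000.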

> As far as the qref latch, what can I do to handle it? The "Suck It
> Dry" article says to increase parallel_execution_message_size, but
> this may create issues if the consumer is taking time to process the
> incoming data, which is probably the case here. Is there anything
> else I can do here?

Generally these are idle PX waits, although I would recommend setting
parallel_execution_message_size=16384.  The default is pretty small on
today's high-memory hosts, and this will become the new default in the
next Oracle release.
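
Note that the parameter is static (at least through the current
releases I've worked with), so it has to go into the spfile and gets
picked up at the next restart; something like:

  alter system set parallel_execution_message_size = 16384 scope=spfile;

  -- after the bounce, confirm the setting and watch the qref waits
  show parameter parallel_execution_message_size

  select event, total_waits, time_waited
  from   v$system_event
  where  event like 'PX qref%';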

If the problem is that the consumer slave set is simply slower than
the producer slave set, then there is little that can be done.
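
One way to confirm which slave set is the bottleneck is v$pq_tqstat,
queried in the same session immediately after the statement finishes
(the view only holds data for your session's last parallel statement):

  select dfo_number, tq_id, server_type, process,
         num_rows, bytes, waits, timeouts
  from   v$pq_tqstat
  order  by dfo_number, tq_id, server_type, process;

High waits and timeouts on the Producer rows mean the producers are
stalling because the consumers can't drain the table queue fast
enough, which lines up with the qref latch symptom.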

-- 
Regards,
Greg Rahn
http://structureddata.org