Re: When Parallel Doesn't Happen

  • From: Kellyn Pedersen <kjped1313@xxxxxxxxx>
  • To: Daniel Fink <daniel.fink@xxxxxxxxxxxxxx>
  • Date: Thu, 1 Oct 2009 12:00:33 -0700 (PDT)

Hey Dan,
Now I could see raising parallel_max_servers if I was hitting the high-water 
mark, or lowering it if someone hadn't weighed the setting against the number 
of CPUs, etc.  What I'm seeing is a parallel_max_servers setting of 168, yet 
only 48 parallel servers in use-  when you look at the queries involved, it 
should have allocated 96, (and at no time was it at the ceiling!)
parallel_automatic_tuning=false on these systems, and the number of parallel 
servers allocated seems to vary depending on the load on the database.  I am 
working on a resource plan to put the higher priority jobs into a resource 
group that would get "first dibs" on the available slaves- something like the 
sketch below- but I hadn't appreciated how dependent parallel allocation is 
on resources, (although it does make perfect sense to me in my head...)  Not 
just CPU, but I/O- especially when sequential reads are involved on indexes 
and, although it shows sequential reads for these as well, partitions.  
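
Here's roughly the shape of the plan I'm sketching out, (the plan and group 
names are just placeholders, and I'd still have to map the users/services 
into the groups):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'HIGH_PRI_BATCH',
    comment        => 'jobs that get first dibs on the slaves');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'PX_PRIORITY_PLAN',
    comment => 'prioritize parallel slave allocation');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'PX_PRIORITY_PLAN',
    group_or_subplan => 'HIGH_PRI_BATCH',
    comment          => 'high priority jobs first',
    mgmt_p1          => 80);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                     => 'PX_PRIORITY_PLAN',
    group_or_subplan         => 'OTHER_GROUPS',
    comment                  => 'everyone else, with the degree capped',
    mgmt_p1                  => 20,
    parallel_degree_limit_p1 => 4);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/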
I'm also considering moving many of these segments from ASSM to managing the 
freelists manually, giving the heavier-hit partitioned tables a little more 
allocated from the start...
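Roughly what I have in mind, as a sketch- the tablespace and segment names 
are placeholders, and since segment space management is a tablespace 
attribute, the segments would have to move to an MSSM tablespace:

CREATE TABLESPACE data_mssm
  DATAFILE '/u01/oradata/data_mssm01.dbf' SIZE 10G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT MANUAL;

-- give a hot partition more freelists up front (names hypothetical)
ALTER TABLE big_fact MOVE PARTITION p_hot
  TABLESPACE data_mssm
  STORAGE (FREELISTS 8);
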
Thoughts?
Kellyn

--- On Thu, 10/1/09, Daniel Fink <daniel.fink@xxxxxxxxxxxxxx> wrote:


From: Daniel Fink <daniel.fink@xxxxxxxxxxxxxx>
Subject: Re: When Parallel Doesn't Happen
To: kjped1313@xxxxxxxxx
Cc: oracle-l@xxxxxxxxxxxxx
Date: Thursday, October 1, 2009, 12:47 PM


Kellyn,

I've been working a similar problem. One 'working hypothesis' is that when a 
session remains connected with an open cursor, the parallel server associated 
with that session is not released. In this case, some of these sessions have 
been idle for hours. Unfortunately, since we came up with this hypothesis and 
implemented a workaround (temporarily raising parallel_max_servers for a 
stats gathering job), the problem has not come back. One friend, who is very 
knowledgeable about PQ, has seen the same behaviour (he's Scottish...so the 
spelling is correct for what he sees) coming from Toad.
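
If you want to look for that pattern, something along these lines (just a 
sketch) shows how many slaves each query coordinator is holding and how long 
the coordinator session has been idle:

-- slaves per query coordinator, with the coordinator's idle time
SELECT s.sid                      AS qc_sid,
       s.username,
       s.program,
       s.status,
       ROUND(s.last_call_et / 60) AS qc_idle_minutes,
       COUNT(*)                   AS slaves_held
FROM   v$px_session p,
       v$session    s
WHERE  s.sid     = p.qcsid
AND    s.serial# = p.qcserial#
AND    p.sid    <> p.qcsid   -- count only the slaves, not the QC row
GROUP  BY s.sid, s.username, s.program, s.status, s.last_call_et
ORDER  BY slaves_held DESC;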

PQ sounds like a great idea, but the actual user implementation and the 
ability to monitor what is actually happening are woefully inadequate. I do 
enjoy the responses when I turn off parallelism and the full table scan with 
1 process runs significantly faster than the previous version with 8 slaves. 
One issue I often see is that a small degree of parallelism is used in 
dev/test with only that one process running, and it works just fine. But take 
it to production with 40 processes all running at a degree of 8 and things 
slow to a crawl.
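
Quick back-of-the-envelope, assuming the usual two slave sets (producers and 
consumers) per statement:

  1 statement  : DOP 8 x 2 slave sets =  16 slaves
  40 statements: 40 x 16              = 640 slaves requested

Against a parallel_max_servers of 168, most of those operations are going to 
be downgraded, if not serialized outright.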

Regards,
Daniel Fink
-- 
Daniel Fink

OptimalDBA    http://www.optimaldba.com
Oracle Blog   http://optimaldba.blogspot.com

Lost Data?    http://www.ora600.be/
Kellyn Pedersen wrote: 

I have just started at a company that subscribes to the philosophy that too 
much of a good thing is really a good thing, so bear with me...
 
The code has parallel hints everywhere- degrees often set to 8 in multiple 
hints in one DML or DDL set, (parallel_threads_per_cpu=2, so start doing the 
math...)  parallel_max_servers is set anywhere from 96 to 168, and someone 
had the idea that as long as they set the threshold high enough, everything 
would run.  
 
I have never seen parallel used the way it is here, and I've come across some 
very interesting challenges.  The number of slaves allocated to a process is 
being downgraded as resources become limited by the poor choices made in some 
of these simultaneous requests.
 
queries parallelized                            10443
DDL statements parallelized                       106
DFO trees parallelized                          10549
Parallel operations not downgraded              10457
Parallel operations downgraded to serial           19
Parallel operations downgraded 75 to 99 pct         0
Parallel operations downgraded 50 to 75 pct         0
Parallel operations downgraded 25 to 50 pct        92
Parallel operations downgraded 1 to 25 pct          0
PX local messages sent                      205011362
PX local messages recv'd                    230002397
PX remote messages sent                             0
PX remote messages recv'd                           0
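
(These came out of V$SYSSTAT- something like the following pulls the same 
figures:)

SELECT name, value
FROM   v$sysstat
WHERE  name LIKE '%parallelized'
OR     name LIKE 'Parallel operations%'
OR     name LIKE 'PX%messages%';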
 





      
