RE: [Fwd: Re: I may never see this again. SGA]

  • From: DEEDSD@xxxxxxxxxxxxxx
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Mon, 21 Jun 2004 15:40:33 -0400




As with some of the idle waits, it depends.  In my case, where the database
is spending 99.5% of its time waiting on PX Deq waits, they aren't idle
anymore.  What my customers are doing is this: they have fact tables in
thousands of partitions, each with a default parallel degree of eight and
parallel_max_servers set to the maximum recommended (384 or something
ridiculous).  They then have five or ten queries kick off, each of them
spawning between 8 and 16 slaves, and the box gets buried in a hurry, even
with parallel_automatic_tuning turned on.
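
For what it's worth, a rough sketch of how one might watch that ceiling
being hit (v$pq_sysstat and dba_tables are standard views; the statistic
names below are the usual 9i ones, so verify them on your release):

  -- How close are we to parallel_max_servers?
  SELECT statistic, value
  FROM   v$pq_sysstat
  WHERE  statistic IN ('Servers Busy', 'Servers Highwater');

  -- Which tables carry an explicit default degree, and how high?
  SELECT owner, table_name, degree
  FROM   dba_tables
  WHERE  TRIM(degree) NOT IN ('1', 'DEFAULT')
  ORDER  BY owner, table_name;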

I seem to recall the cache buffers chains latches are the latches that
protect the chains of hash buckets the buffers reside on.  My guess is that
I'm seeing tons of cache buffers chains latching because quite a few
parallel slaves are trying to hit the same latch simultaneously.  That's my
current guess; I have been too busy with other things to actually verify
that's what's going on.
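
If I ever get the time, verifying it would look something like this
(v$latch_children is a standard view; x$bh needs a SYS connection, and the
substitution variable is just a placeholder):

  -- Which cache buffers chains child latches are hottest?
  SELECT addr, gets, misses, sleeps
  FROM   v$latch_children
  WHERE  name = 'cache buffers chains'
  ORDER  BY sleeps DESC;

  -- For one hot child latch, which buffers hang off its chain?
  SELECT obj, dbarfil, dbablk, tch
  FROM   x$bh
  WHERE  hladdr = '&hot_latch_addr'   -- substitute an ADDR from above
  ORDER  BY tch DESC;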


                                                                                
                                  
                          "Koivu, Lisa"                                         
                                  
                          <Lisa.Koivu@Cendant-TRG  T                            
                                  
                          .com>                    To:   
<oracle-l@xxxxxxxxxxxxx>                                 
                                                   cc:                          
                                  
                          Sent by:                                              
                                  
                          oracle-l-bounce@freelis  bcc:                         
                                  
                          ts.org                   Subject:                     
                     RE: [Fwd:    
                                                   Re: I may never see this 
again.  SGA]                          
                                                                                
                                  
                                                                                
                                  
                          06/15/2004 03:43 PM                                   
                                  
                          Please respond to                                     
                                  
                          oracle-l                                              
                                  
                                                                                
                                  
                                                                                
                                  




I swear I thought I read that PX Deq waits were idle waits.  Do you
still see these waits with parallel automatic tuning on?  I'm not
suggesting that turning it all to automatic is the answer, but it seems
to work rather well on the box I'm testing.  Maybe that's because I
don't know better.
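
For reference, flipping it on looks roughly like this (it's a static
parameter in 9i, so the change only takes effect after a bounce, and this
assumes you're running off an spfile):

  SQL> show parameter parallel_automatic_tuning

  ALTER SYSTEM SET parallel_automatic_tuning = TRUE SCOPE = SPFILE;
  -- then restart the instance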

My impression of cache buffers chains waits is that they're processes
fighting over a buffer.  Am I wrong?  With the larger buffer cache I've
been testing, these waits have nearly gone away during data loads.
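
Something like this should show the before/after, more or less (standard
views; in 9i the chains latch contention rolls up under the generic
'latch free' wait event):

  SELECT event, total_waits, time_waited
  FROM   v$system_event
  WHERE  event = 'latch free';

  SELECT name, gets, misses, sleeps
  FROM   v$latch
  WHERE  name = 'cache buffers chains';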

Comments?

-----Original Message-----
From: DEEDSD@xxxxxxxxxxxxxx [mailto:DEEDSD@xxxxxxxxxxxxxx]
Sent: Tuesday, June 15, 2004 8:27 AM
To: oracle-l@xxxxxxxxxxxxx
Subject: Re: [Fwd: Re: I may never see this again. SGA]





This system is a wonderful example of something having 'an infinite
capacity to wait'.

My best solution is to club the consultants that suggested default
parallel degrees between 8 and 16 be placed on the tables, then club the
developers that allowed it.  Following the clubbing, immediate sacking is
recommended.  Then cut the default parallelism down to a reasonable level
if people still insist on using parallel query, redesign the ~550 tables
that have 180+ partitions each into a good logical and physical design,
and hire developers that know what they are doing to fix the application.
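
Dialing the degree back down is at least mechanical (the table name below
is made up for illustration):

  -- per table:
  ALTER TABLE sales_fact PARALLEL (DEGREE 2);
  -- or kill table-level parallelism outright:
  ALTER TABLE sales_fact NOPARALLEL;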

After all that is done, the PX Deq waits and cache buffers chains waits
will take care of themselves....

I think you know the database of which I speak, Joe...




From: <jtesta@xxxxxxxxxx>
Sent by: oracle-l-bounce@freelists.org
To: <oracle-l@xxxxxxxxxxxxx>
Subject: [Fwd: Re: I may never see this again.  SGA]
Date: 06/14/2004 04:41 PM
Please respond to: oracle-l

OK, I'll bite: what's the solution for the PX Deq and cache buffers chains
waits?

joe



original message below:

 Bah.

 On a 24-CPU Sun box with 96 GB of memory, one 2 TB database.  You should
 see the PX Deq and cache buffers chains waits!!  Completely obscene.
 It's a train wreck.  But we have to do what the customers demand...

 Connected to:
 Oracle9i Enterprise Edition Release 9.2.0.4.0 - 64bit Production
 With the Partitioning, OLAP and Oracle Data Mining options
 JServer Release 9.2.0.4.0 - Production

 SQL> show sga

 Total System Global Area 7.2311E+10 bytes
 Fixed Size                   835056 bytes
 Variable Size            2499805184 bytes
 Database Buffers         6.9810E+10 bytes
 Redo Buffers                 319488 bytes
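
 (If the E-notation above bothers you, something like this spells it out;
 v$sga is a standard view:

   SELECT name, ROUND(value/1024/1024/1024, 2) AS gb
   FROM   v$sga;

 That Database Buffers figure works out to roughly 65 GB.)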






"The sender believes that this E-Mail and any attachments were free of any
virus, worm, Trojan horse, and/or malicious code when sent. This message
and its attachments could have been infected during transmission.  By
reading the message and opening any attachments, the recipient accepts full
responsibility for taking proactive and remedial action about viruses and
other defects. The sender's business entity is not liable for any loss or
damage arising in any way from this message or its attachments."




----------------------------------------------------------------
Please see the official ORACLE-L FAQ: http://www.orafaq.com
----------------------------------------------------------------
To unsubscribe send email to:  oracle-l-request@xxxxxxxxxxxxx
put 'unsubscribe' in the subject line.
--
Archives are at //www.freelists.org/archives/oracle-l/
FAQ is at //www.freelists.org/help/fom-serve/cache/1.html
-----------------------------------------------------------------
