How one configures parallelism depends on the objective. For example, if you want consistent performance, then setting a fixed degree on the object probably works best. The downside is that on an idle system only that fixed number of slaves will do the work, likely leaving some CPU cycles unused. This is what I've seen used in most data warehouses because it gives the most predictable/consistent response times. The DBA generally has a sense of the number of concurrent queries, the average DOP used, and the amount of resources (CPU) it takes, and manages it accordingly.
I personally haven't seen much use of parallel_adaptive_multi_user in production environments. Most often the reasoning is the inconsistent response times. Assuming a default DOP on the objects, when a single query runs it can get close to using all the CPU available, so the response time is minimized. Now consider 10 PQ queries running when a user starts an 11th. The adaptive algorithm will more than likely trim down the DOP because the other 10 queries are consuming the majority of the CPU resources. The elapsed time for this query is likely to be several times that of the single-query time. The other notable point is that the DOP is chosen when the query starts and is fixed for the duration of the execution. This can create a scenario where the host is very busy when a PQ query starts, so a low DOP is chosen automatically; moments later the other queries finish and only a small percentage of the CPU is being consumed, yet the DOP cannot be increased on an in-flight query. Compare this to a scenario where a fixed DOP is used and the kernel is left to schedule/queue the processes. The latter may result in a more desirable outcome.
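A toy sketch of the behavior described above. This is not Oracle's actual (internal) adaptive algorithm; the function name and the load heuristic are made up purely to illustrate why a query that starts on a busy host gets stuck with a low DOP:

```python
def adaptive_dop(requested_dop, busy_pq_slaves, parallel_max_servers):
    """Toy model of parallel_adaptive_multi_user-style DOP reduction.

    Hypothetical illustration only -- Oracle's real algorithm is internal.
    The point is that the DOP is computed once, at query start, based on
    current load, and is never raised for an in-flight query even if the
    load drops moments later.
    """
    available = max(parallel_max_servers - busy_pq_slaves, 1)
    return min(requested_dop, available)

# Idle system: the 1st query gets its full requested DOP.
print(adaptive_dop(8, 0, 80))       # -> 8

# Busy system: 10 queries already hold 8 slaves each (80 total),
# so the 11th query is trimmed all the way down.
print(adaptive_dop(8, 10 * 8, 80))  # -> 1
```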
In the end it comes down to whether you want Oracle to manage the PQ processes or have the kernel scheduler do so.
One comment I would make about CPU_COUNT: from what I've seen, hyperthreading doesn't provide twice the CPU power, so it may be better to manually set CPU_COUNT to just the number of physical cores. This mainly comes into play when a default DOP is used or a high number of PQ queries are being executed.
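To see why this matters with a default DOP: on a single instance the default DOP works out to roughly CPU_COUNT x PARALLEL_THREADS_PER_CPU, so counting hyperthreads doubles the number of slaves every defaulted query requests. A quick sketch using the numbers from Don's box below (the function name is just for illustration):

```python
def default_dop(cpu_count, parallel_threads_per_cpu=2):
    """Approximate single-instance default DOP:
    CPU_COUNT x PARALLEL_THREADS_PER_CPU."""
    return cpu_count * parallel_threads_per_cpu

# CPU_COUNT=8 (hyperthreads counted) vs CPU_COUNT=4 (physical cores):
print(default_dop(8))  # -> 16 slaves requested per PQ operation
print(default_dop(4))  # -> 8
```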
Regards,
Greg Rahn
http://structureddata.org

-------- Original Message --------
Subject: Thoughts on Parallelization
From: "Don Seiler" <don@xxxxxxxxx>
To: oracle-l <oracle-l@xxxxxxxxxxxxx>
Date: 5/15/2007 1:48 PM
I've been doing some reading on parallelization, after seeing Doug Burns' presentation on the Pythian blog a while back. In it, Doug mentioned that he doesn't set any degree of parallelism on tables or indexes, and relies on query hints to use parallel query. I'm wondering what others might think of this, or if relying on Oracle's multi-user adaptive logic to control things is fine. I realize that there might be some extreme cases, but I'm wondering if you are happily using the default parallelism on your tables and indexes and living life to the fullest.

Also, what about the equation for setting parallel_max_servers? The documentation suggests: CPU_COUNT x PARALLEL_THREADS_PER_CPU x (2 if PGA_AGGREGATE_TARGET > 0; otherwise 1) x 5. My CPU_COUNT is 8, but that is due to Linux hyperthreading on 4 physical CPUs. So even then I'm looking at 4 x 2 x 2 x 5 = 80 parallel_max_servers. (Running 10.2.0.2 on RHEL3.) I'm not sure what to expect, so I'd like to see what others are using in their setups, if you're willing to share.
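The documented formula above, worked out as code for both CPU_COUNT values in play (the formula is straight from the post; the function name is just for illustration):

```python
def parallel_max_servers(cpu_count, parallel_threads_per_cpu=2,
                         pga_aggregate_target_set=True):
    """Documented default: CPU_COUNT x PARALLEL_THREADS_PER_CPU
    x (2 if PGA_AGGREGATE_TARGET > 0, otherwise 1) x 5."""
    pga_factor = 2 if pga_aggregate_target_set else 1
    return cpu_count * parallel_threads_per_cpu * pga_factor * 5

print(parallel_max_servers(4))  # -> 80, counting only the 4 physical cores
print(parallel_max_servers(8))  # -> 160, if CPU_COUNT counts hyperthreads
```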