Re: parallel_max_servers and the number of sessions involved in a SQL

  • From: Lothar Flatz <l.flatz@xxxxxxxxxx>
  • To: oracle-l@xxxxxxxxxxxxx
  • Date: Mon, 30 Nov 2020 12:15:34 +0100

There is the more practical aspect that I would, in general, strongly discourage using default parallelism.
The calculated DOP is usually far higher than sensible. Besides, I prefer a stable behaviour that does not depend on parameters that might be changed without the consequences being fully understood.
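
A minimal sketch of what avoiding the calculated default can look like in practice; the table name T and the DOP of 4 are illustrative assumptions, not from this thread:

    -- request a fixed DOP at statement level instead of the calculated default
    select /*+ parallel(t 4) */ count(*) from t;

    -- or pin the DOP on the object itself
    alter table t parallel 4;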

Regards

Lothar
On 29.11.2020 at 13:54, Mikhail Velikikh wrote:

40 processes per CPU sounds like the default value of parallel_max_servers: https://docs.oracle.com/database/121/REFRN/GUID-B6CE9EA3-B846-4AA7-B668-FD3BEE433A24.htm#REFRN10158

PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5

    The number of concurrent parallel users running at default degree
    of parallelism on an instance depends on the memory initialization
    parameter settings for the instance. For example, if the
    MEMORY_TARGET or SGA_TARGET initialization parameter is set,
    then the number of concurrent_parallel_users = 4. If neither
    MEMORY_TARGET or SGA_TARGET is set, then
    PGA_AGGREGATE_TARGET is examined. If a value is set for
    PGA_AGGREGATE_TARGET, then concurrent_parallel_users = 2. If
    a value is not set for PGA_AGGREGATE_TARGET, then
    concurrent_parallel_users = 1.

PARALLEL_THREADS_PER_CPU = 2
concurrent_parallel_users = 4
That gives exactly 40 per CPU (= 2 * 4 * 5).
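
A minimal sketch of checking the inputs to that formula (and the resulting default) on a given instance; the parameter names are the ones used in the formula above:

    select name, value
    from   v$parameter
    where  name in ('cpu_count', 'parallel_threads_per_cpu',
                    'parallel_max_servers', 'memory_target',
                    'sga_target', 'pga_aggregate_target');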

However, I don't see that this limit is honored in my 19.9 database: I am able to allocate more parallel processes even though the documentation mentions some adjustments:
https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/PARALLEL_MAX_SERVERS.html#GUID-B6CE9EA3-B846-4AA7-B668-FD3BEE433A24

    The lower of the two values is used as the default value of
    PARALLEL_MAX_SERVERS, and if you attempt to explicitly set
    PARALLEL_MAX_SERVERS to a value that is higher than either of
    the values, then the setting is adjusted to the lower of the two
    values.

The adjustment can be related to reserved processes, and it shows up as entries such as these in the alert log (Second Alert_<Sid>.Log In ORACLE_HOME/dbs Directory Next To The Standard Alert Log Under ORACLE_BASE/admin (Doc ID 1322075.1)):

    Sun Feb 20 11:33:46 PST 2011
    Adjusting the default value of parameter parallel_max_servers
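
A minimal sketch of searching for that adjustment message from SQL rather than from the file system, assuming a release where the alert log is exposed through V$DIAG_ALERT_EXT:

    select originating_timestamp, message_text
    from   v$diag_alert_ext
    where  message_text like '%Adjusting the default value of parameter parallel_max_servers%';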


There are really two issues here:
1. The wrong results issue that Jonathan Lewis mentioned. It might be something like this: (DB36) Bug 20509482 - ORA-600 [3020], ORA-752 Wrong Results or RMAN ORA-600 [krcrfr_nohist] after Parallel Direct Load in RAC (caused by fix for bug 9962369) (Doc ID 2139374.1)
2. The DOP downgrade issue, which is a separate thing. I usually analyze the trace file produced by: alter session set events 'trace[px_scheduler]'; (see the sketch below) but it can be approached differently: How to View Why the Degree Of Parallelism (DOP) was Downgraded for an SQL in 12c (Doc ID 2011375.1)
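
A minimal sketch of capturing that px_scheduler trace around one statement; the tracefile_identifier value is only an illustrative label:

    alter session set tracefile_identifier = 'px_downgrade';
    alter session set events 'trace[px_scheduler]';
    -- run the parallel statement here, then locate the trace file
    select value from v$diag_info where name = 'Default Trace File';
    alter session set events 'trace[px_scheduler] off';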



On Sun, 29 Nov 2020 at 12:30, Jonathan Lewis <jlewisoracle@xxxxxxxxx> wrote:


    This is why I asked you about all your parallel parameters and
    what parameters you were leaving to default.
    In 19.3 (for example), if you don't set "processes" then the
    default number of processes is "80 * CPU_count + 40" - which
    almost looks like Oracle deciding that it has to have 40 processes
    for the critical background processes and that a maximum of 80
    processes per CPU is a sensible limit (for an OLTP system).
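
    A minimal sketch of comparing the running value of PROCESSES with
    that "80 * CPU_count + 40" default; both names are real v$parameter
    entries and the arithmetic is just the formula above:

        select p.value as processes,
               c.value as cpu_count,
               80 * to_number(c.value) + 40 as computed_default
        from   v$parameter p, v$parameter c
        where  p.name = 'processes'
        and    c.name = 'cpu_count';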

    In the same vein it's perfectly reasonable for someone in Oracle
    to decide that if 80 processes per CPU is sensible for "normal"
    processing then 40 per CPU is equally sensible for the
    "batch-like" processes of parallel execution. In fact they might
    be thinking in terms of the impact of 20 batch-like processes per
    CPU on the assumption that DOP 20 usually gets 40 processes but
    only 20 of them are likely to be very busy at any one instant.

    Regards
    Jonathan Lewis


    On Sat, 28 Nov 2020 at 17:15, ahmed.fikri@xxxxxxxxxxx
    <ahmed.fikri@xxxxxxxxxxx> wrote:

        Sorry, my bad. Indeed, there is a correlation between the number
        of processes and the PGA (which is also logical):

        After setting pga_aggregate_target to 10M (an extreme
        value) and processes to 1500, I got this correlation:

        cpu (host)   n_max
        1               40
        2               80
        3              120
        4              160
        5              200
        6              240
        7              280
        8              320

        The question now is where the value 40 comes from. (I'll also
        try this test on 11.2)
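
        A minimal sketch of one way to see what the instance actually
        allows for PX servers during such a test; the statistic names
        are those exposed by V$PX_PROCESS_SYSSTAT:

            select statistic, value
            from   v$px_process_sysstat
            where  statistic like 'Servers%';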


