40 processes per CPU sounds like the default value of parallel_max_servers:
https://docs.oracle.com/database/121/REFRN/GUID-B6CE9EA3-B846-4AA7-B668-FD3BEE433A24.htm#REFRN10158
PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5
The number of concurrent parallel users running at default degree of
parallelism on an instance depends on the memory initialization parameter
settings for the instance. For example, if the MEMORY_TARGET or SGA_TARGET
initialization parameter is set, then the number of
concurrent_parallel_users = 4. If neither MEMORY_TARGET nor SGA_TARGET is
set, then PGA_AGGREGATE_TARGET is examined. If a value is set for
PGA_AGGREGATE_TARGET, then concurrent_parallel_users = 2. If a value is
not set for PGA_AGGREGATE_TARGET, then concurrent_parallel_users = 1.
The lower of the two values is used as the default value of
PARALLEL_MAX_SERVERS, and if you attempt to explicitly set
PARALLEL_MAX_SERVERS to a value that is higher than either of the values,
then the setting is adjusted to the lower of the two values.

The adjustment can be about reserved processes and entries such as these in
the alert log:

Sun Feb 20 11:33:46 PST 2011
Adjusting the default value of parameter parallel_max_servers
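A minimal Python sketch of the documented calculation. The function names are my own for illustration; only the init.ora parameter names and the formula come from the docs quoted above, and the PARALLEL_THREADS_PER_CPU default of 2 is an assumption (it is platform-dependent, usually 2 in 12.1):

```python
# Illustrative sketch of the documented default for PARALLEL_MAX_SERVERS.
# Function names are invented; the formula is the one quoted above:
# PARALLEL_THREADS_PER_CPU * CPU_COUNT * concurrent_parallel_users * 5

def concurrent_parallel_users(memory_target=None, sga_target=None,
                              pga_aggregate_target=None):
    # Per the docs: MEMORY_TARGET or SGA_TARGET set -> 4,
    # else PGA_AGGREGATE_TARGET set -> 2, else 1.
    if memory_target or sga_target:
        return 4
    if pga_aggregate_target:
        return 2
    return 1

def default_parallel_max_servers(cpu_count, parallel_threads_per_cpu=2,
                                 **memory_params):
    # Assuming PARALLEL_THREADS_PER_CPU = 2 (its usual 12.1 default).
    return (parallel_threads_per_cpu * cpu_count *
            concurrent_parallel_users(**memory_params) * 5)

# With SGA_TARGET set:               2 * cpu * 4 * 5 = 40 * cpu
# With only PGA_AGGREGATE_TARGET:    2 * cpu * 2 * 5 = 20 * cpu
print(default_parallel_max_servers(8, sga_target=True))            # 320
print(default_parallel_max_servers(8, pga_aggregate_target=True))  # 160
```

Note that with SGA_TARGET (or MEMORY_TARGET) set, this formula yields exactly 40 servers per CPU, matching the figure at the top of the thread.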
This is why I asked you about all your parallel parameters and what
parameters you were leaving to default.
In 19.3 (for example) if you don't set "processes" then the default
number of processes is "80 * CPU_count + 40" - which almost looks like
Oracle deciding that it has to have 40 processes for the critical
background processes and a maximum of 80 processes per CPU is a sensible
limit (for an OLTP system).
In the same vein it's perfectly reasonable for someone in Oracle to decide
that if 80 processes per CPU is sensible for "normal" processing then 40
per CPU is equally sensible for the "batch-like" processes of parallel
execution. In fact they might be thinking in terms of the impact of 20
batch-like processes per CPU on the assumption that DOP 20 usually gets 40
processes but only 20 of them are likely to be very busy at any one instant.
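The arithmetic in the two paragraphs above, as a quick sketch (the function name is mine; the "80 * CPU_count + 40" default and the 40-per-CPU reading come from the post):

```python
def default_processes(cpu_count):
    # 19.3 default for "processes" when left unset (per the post above):
    # 40 reserved for critical background processes plus 80 per CPU.
    return 80 * cpu_count + 40

# 40 parallel servers per CPU is half the 80-per-CPU "normal" limit:
for cpu in (1, 4, 8):
    print(cpu, default_processes(cpu), 40 * cpu)
```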
Regards
Jonathan Lewis
On Sat, 28 Nov 2020 at 17:15, ahmed.fikri@xxxxxxxxxxx <
ahmed.fikri@xxxxxxxxxxx> wrote:
Sorry, my bad. Indeed, there is a correlation between the process number and
the PGA (which is also logical):
After setting the pga_aggregate_target to 10M (an extreme value) and
processes to 1500, I got this correlation:
cpu (host) n_max
1 40
2 80
3 120
4 160
5 200
6 240
7 280
8 320
The question now is where the value 40 comes from. (I'll also try this
test on 11.2)
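For what it's worth, the figures in the table fit n_max = 40 * cpu exactly:

```python
# Observed (cpu, n_max) pairs from the table above.
observed = {1: 40, 2: 80, 3: 120, 4: 160, 5: 200, 6: 240, 7: 280, 8: 320}
assert all(n_max == 40 * cpu for cpu, n_max in observed.items())
print("n_max = 40 * cpu for all observed points")
```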