Are these all bigfile tablespaces, and do all the files have
dba_data_files.increment_by set explicitly to 10GB (i.e. 1,310,720 blocks if
you're using 8KB blocks)?
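A quick sanity check of the arithmetic above: AUTOEXTEND NEXT is stored in
dba_data_files.increment_by as a number of database blocks, so a 10GB
increment with an 8KB block size (the block size is an assumption from the
thread) should show up as 1,310,720:

```python
GB = 1024 ** 3
block_size = 8 * 1024        # 8KB blocks -- assumption from the thread
next_size = 10 * GB          # AUTOEXTEND ON NEXT 10G

increment_by_blocks = next_size // block_size
print(increment_by_blocks)   # 1310720
```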
Is the parameter "_enable_extent_preallocation" set to 3 on production and
on the standby?
Do the two systems have significantly different numbers of CPUs, which
could affect the number of Wnnn processes?
Do you get any clues about the small allocations being handled by
foreground processes while the large ones are handled by Wnnn?
I can't explain why you're seeing the effect, but I could imagine even a
10GB (unit) allocation being shared out in units of 64MB across multiple
Wnnn processes thanks to some algorithm checking concurrency, load, and CPU
usage.
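The hypothesis above — a single 10GB extension farmed out to space-management
slaves in 64MB pieces — can be put into back-of-the-envelope numbers. The
chunk size is the observed 64MB; the worker count is purely hypothetical:

```python
GB = 1024 ** 3
MB = 1024 ** 2

extension = 10 * GB      # one AUTOEXTEND NEXT request
chunk = 64 * MB          # observed resize unit
workers = 4              # hypothetical number of Wnnn slaves

chunks = extension // chunk        # resize operations per 10GB extension
per_worker = chunks // workers     # chunks each, if spread evenly
print(chunks, per_worker)          # 160 40

# The 15k resizes reported below would then correspond to roughly:
total_growth_gb = 15_000 * chunk / GB
print(total_growth_gb)             # 937.5
```

So 15,000 resizes of 64MB imply on the order of 940GB of growth applied one
small chunk at a time, which is consistent with the apply-lag symptom
described in the original question.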
On Thu, 17 Jun 2021 at 09:28, Laurentiu Oprea <laurentiu.oprea06@xxxxxxxxx>
wrote:
Has anyone experienced the behavior of bigfile tablespace resize operations
happening in chunks of 64MB, with a very large number of resize operations?
If so, how did you mitigate it?
I see some tablespaces being extended by over 15k resize operations of 64MB
each, even though the size defined by AUTOEXTEND ON NEXT is 10GB.
The problem is that this huge number of resize operations is replayed on the
standby database, which seems to struggle to handle both redo apply and the
resize operations, creating significant apply lag.