Re: Partitions expanding above the HWM

  • From: Kellyn Pedersen <kjped1313@xxxxxxxxx>
  • To: Thomas Roach <troach@xxxxxxxxx>
  • Date: Tue, 12 Jan 2010 08:17:57 -0800 (PST)

Thank you for your patience and kindness with me today...:)
OK, so the difference seems to be in the full pull vs. the incremental. There 
must be something different about these: in the code? In the concurrency, or the 
way it load balances?
How is it different, if this seems to be the trigger point?

Kellyn Pedersen
Multi-Platform DBA
I-Behavior Inc.
"Go away before I replace you with a very small and efficient shell script..."

--- On Tue, 1/12/10, Thomas Roach <troach@xxxxxxxxx> wrote:

From: Thomas Roach <troach@xxxxxxxxx>
Subject: Re: Partitions expanding above the HWM
To: "Kellyn Pedersen" <kjped1313@xxxxxxxxx>
Cc: "Surachart Opun" <surachart@xxxxxxxxx>, "Oracle Discussion List" 
Date: Tuesday, January 12, 2010, 8:35 AM

All good points :)... 
So we have 7 transaction-based servers. We have a pull that hits each server, 
so 7 jobs pull this data into a staging table that is partitioned by server 
name. No problem so far. Once the data is loaded there and the transformations 
happen, the polished data is inserted into the reporting tables and then the 
old records are removed. We only remove old records if that device has uploaded 
new records. One of the tables the issue is with is hash partitioned by 
customer ID. I just looked, and last night my tables reused the existing space. 
It seems that about once a week, when they do a full pull versus an incremental, 
we see the table size take off. If allowed to go for several weeks, we begin to 
see a pattern. The amount of real space within the 640GB segment is roughly 
17GB. Even if we were not committing our data and the space was not being 
reused, would that explain over 97% of the space not being reused? I'm baffled, 
but I am also having trouble trying to replicate the issue. I went back through 
the process, and they are running the same package to load the data whether it 
is a full load or an incremental load. I don't see any parallel DML. No direct 
path loads. I don't see any APPEND hints. I don't find anything in Metalink.
Starting to think I should probably open an SR at this point :)
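One way to double-check the "no direct path" finding is to look for LOAD AS SELECT operations in the cached plans. A sketch, assuming the load statement is still in the shared pool; the text filter is a guess at your package's INSERT, and note that on older releases the conventional-path row may not appear in the plan at all:

```sql
-- Direct-path inserts show a LOAD AS SELECT plan step;
-- conventional inserts show LOAD TABLE CONVENTIONAL (11g+).
SELECT s.sql_id,
       s.sql_text,
       p.operation
  FROM v$sql s
  JOIN v$sql_plan p
    ON p.sql_id = s.sql_id
   AND p.child_number = s.child_number
 WHERE p.operation IN ('LOAD AS SELECT', 'LOAD TABLE CONVENTIONAL')
   AND UPPER(s.sql_text) LIKE 'INSERT%STAGING_TABLE%';
```

If the full-pull run ever shows LOAD AS SELECT while the incremental shows conventional loads, that alone would explain inserts landing above the HWM.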

On Tue, Jan 12, 2010 at 10:23 AM, Kellyn Pedersen <kjped1313@xxxxxxxxx> wrote:

OK, I know I'm going off very little sleep this last week, so forgive me if I'm 
just oblivious today...:)
The main issue is that you are concerned about the free space not being 
re-utilized by the next insert after the delete...

"INSERT INTO table SELECT * FROM staging_table"
but I have to ask: where are you committing your transactions, both after the 
inserts and after the deletes?
If this is running in parallel, then until a commit is issued this isn't going 
to be seen as free space, and I would foresee continual fragmentation in the 
object. I've always recommended doing bulk deletes of data separately from data 
loads, whenever possible. Even with a change in the load process, I still 
often see blocks that aren't reused when there are heavy parallel inserts 
and deletes on an object.
Again, apologies if I'm missing the point here, but until I perform the steps 
that Surachart listed below, I wouldn't expect the HWM to shrink, and I would 
expect it to grow due to the logic in the code.
Kellyn Pedersen
Multi-Platform DBA
I-Behavior Inc.
"Go away before I replace you with a very small and efficient shell script..."

--- On Mon, 1/11/10, Thomas Roach <troach@xxxxxxxxx> wrote:

From: Thomas Roach <troach@xxxxxxxxx> 

Subject: Re: Partitions expanding above the HWM
To: "Surachart Opun" <surachart@xxxxxxxxx>
Cc: "Oracle Discussion List" <oracle-l@xxxxxxxxxxxxx>
Date: Monday, January 11, 2010, 7:29 PM 

Thanks for the assistance. The block size for the whole DB is 16KB. The 
partitioning is hash by customer with a subpartition, so range partitioning 
doesn't directly help me here, but it is definitely an option. What happens is 
the insert runs (and we have 7 processes running in parallel, so there is some 
contention on these heavy inserts), then each one cleans up the old copies of 
the records. We only delete old records when new ones are uploaded, so if we 
insert 50,000 records and only 30,000 of them are updates, then it inserts all 
50k and then drops the old 30k, so we could still have 20k from the old run. So 
as more records are inserted, the old blocks look like swiss cheese, but the 
old space does not appear to be reused. With ASSM, I wonder if I am hitting 
some sort of bug, because the operations I see don't tell me that it is 
avoiding previously used blocks.
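If it helps, DBMS_SPACE.SPACE_USAGE reports how the blocks below the HWM are classified in the ASSM bitmaps, so you can see whether the deletes really leave reusable (FS3/FS4) blocks behind. A sketch; the owner, table, and partition names are made up:

```sql
-- Report ASSM freespace buckets for one hash partition.
-- FS1..FS4 = blocks with 0-25% .. 75-100% free space.
DECLARE
  l_unf   NUMBER; l_unfb  NUMBER;
  l_fs1   NUMBER; l_fs1b  NUMBER;
  l_fs2   NUMBER; l_fs2b  NUMBER;
  l_fs3   NUMBER; l_fs3b  NUMBER;
  l_fs4   NUMBER; l_fs4b  NUMBER;
  l_full  NUMBER; l_fullb NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner      => 'REPORTING',     -- hypothetical owner
    segment_name       => 'REPORT_TABLE',  -- hypothetical table
    segment_type       => 'TABLE PARTITION',
    partition_name     => 'SYS_P21',       -- hypothetical partition
    unformatted_blocks => l_unf,  unformatted_bytes => l_unfb,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1b,
    fs2_blocks => l_fs2, fs2_bytes => l_fs2b,
    fs3_blocks => l_fs3, fs3_bytes => l_fs3b,
    fs4_blocks => l_fs4, fs4_bytes => l_fs4b,
    full_blocks => l_full, full_bytes => l_fullb);
  DBMS_OUTPUT.PUT_LINE('full=' || l_full ||
      ' fs4=' || l_fs4 || ' fs3=' || l_fs3 ||
      ' fs2=' || l_fs2 || ' fs1=' || l_fs1 ||
      ' unformatted=' || l_unf);
END;
/
```

If lots of blocks sit in FS3/FS4 yet the segment keeps growing, the inserts really are skipping the freed blocks, which would support the bug theory.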

On Mon, Jan 11, 2010 at 9:13 PM, Surachart Opun <surachart@xxxxxxxxx> wrote:

With ASSM, PCTUSED is not needed (it is ignored).

If you use an ASSM tablespace, you can shrink the segment (each partition). 
Something like this (row movement must be enabled on the table first):

ALTER TABLE your_table MODIFY PARTITION your_part SHRINK SPACE COMPACT;
-- shrinks the segment, but does not reduce the HWM
ALTER TABLE your_table MODIFY PARTITION your_part SHRINK SPACE;
-- also reduces the HWM

This case may occur if you have many insert sessions (and sessions that delete 
the high values).
But 640 GB shrinking down to 17 GB?

What is the block_size in this tablespace? I have seen segments grow like this 
with a big block size.

I have one more question: how does the delete process work?
If you delete old days' data, you may want to make the table partitioned by 
RANGE (and then truncate the old day's partition).
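The range-by-day idea could be sketched like this (all names and columns are hypothetical). Truncating a partition is a near-instant metadata operation that resets that partition's HWM, instead of a big DELETE that leaves swiss-cheese blocks:

```sql
-- Hypothetical daily-range layout for the reporting table.
CREATE TABLE report_by_day (
  customer_id  NUMBER,
  load_date    DATE,
  payload      VARCHAR2(4000)
)
PARTITION BY RANGE (load_date) (
  PARTITION p20100110 VALUES LESS THAN (DATE '2010-01-11'),
  PARTITION p20100111 VALUES LESS THAN (DATE '2010-01-12'),
  PARTITION p20100112 VALUES LESS THAN (DATE '2010-01-13')
);

-- Aging out a day then frees the space and its HWM in one step:
ALTER TABLE report_by_day TRUNCATE PARTITION p20100110
  UPDATE GLOBAL INDEXES;
```

The trade-off is that the delete logic here is "replace old copies when new ones arrive", not "drop old days", so this only fits if the retention rule can be expressed by date.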

Good Luck
Surachart Opun 

On Tue, Jan 12, 2010 at 5:55 AM, Thomas Roach <troach@xxxxxxxxx> wrote:

4 node RAC
Linux x86_64 RHEL 5.3
ASSM on the tablespaces (but the table has PCTFREE of 10 and PCTUSED of 0, 
which should be ignored because of ASSM)
Reporting table:
Partitioned by hash into 16 partitions.
Degree of parallelism is 1.
Staging table:
Partitioned by server name (7 partitions).
Parallelization is enabled.
I am having an issue with a table that is just using way too much disk space. 
As of today, it was using 640GB. I shrunk it down to about 17GB by using the 
move partition command with update indexes. As data is inserted, it appears to 
grow above the old HWM again. I initially thought I had it when I saw PCTUSED 
0, but then I found the tablespace was ASSM, so this value "should" be ignored. 
I found the package that loads this table from a staging table.
What happens is this:
"INSERT INTO table SELECT * FROM staging_table"
So I set up a little test case, and I can't seem to replicate the issue. The 
package that does the ETL is not doing anything with APPEND. I tested with 
parallel DML enabled at the session level, and that yielded no results. Does 
anyone have any ideas what I might be missing? Could this be a bug with ASSM? 
Any help would be greatly appreciated, as I have banged my head against the 
monitor all day (not literally).
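For what it's worth, a minimal test case for the space-reuse question might look like the following (table name, row counts, and padding are made up). The key is to commit between the delete and the re-insert, then watch USER_SEGMENTS:

```sql
-- Insert, commit, delete, commit, re-insert: with conventional-path
-- inserts in ASSM, the second insert should reuse the freed blocks,
-- so BLOCKS in USER_SEGMENTS should stay roughly flat.
CREATE TABLE hwm_test (id NUMBER, pad VARCHAR2(1000))
  PARTITION BY HASH (id) PARTITIONS 16;

INSERT INTO hwm_test
  SELECT level, RPAD('x', 1000, 'x')
    FROM dual CONNECT BY level <= 50000;
COMMIT;

DELETE FROM hwm_test WHERE MOD(id, 5) <> 0;  -- drop 80% of the rows
COMMIT;

INSERT INTO hwm_test
  SELECT level + 50000, RPAD('x', 1000, 'x')
    FROM dual CONNECT BY level <= 50000;
COMMIT;

SELECT SUM(blocks) FROM user_segments
 WHERE segment_name = 'HWM_TEST';
```

Running the same pattern with 7 concurrent sessions (like the real pull jobs) would be the next step, since single-session tests may not reproduce contention-driven bitmap behavior.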

Thomas Roach


