reduce table fragmentation by design

  • From: LS Cheng <exriscer@xxxxxxxxx>
  • To: Oracle Mailinglist <oracle-l@xxxxxxxxxxxxx>
  • Date: Tue, 25 Jan 2011 21:09:44 +0100

Hi

We have a database that receives statistics data every 5 minutes. The data
goes into a heap table, not partitioned because this is Standard Edition
10.2.0.4.

The table has around 50 million rows. Most of the data, around 80%, is
inserted using INSERT INTO ... SELECT; the remaining 20% comes in as plain
INSERT INTO ... VALUES.

The table consists of a surrogate key, a timestamp and several columns
that store numbers. Every week or so a purge process runs which retains
45 days of data; the column used to purge is the timestamp.
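
For illustration, a minimal sketch of the setup (table, column and
constraint names here are made up, not our real schema):

    CREATE TABLE stats_data (
      stat_id   NUMBER    NOT NULL,   -- surrogate key from a sequence
      sample_ts TIMESTAMP NOT NULL,   -- column the purge is based on
      metric1   NUMBER,
      metric2   NUMBER,
      CONSTRAINT stats_data_pk PRIMARY KEY (stat_id)
    );

    -- weekly purge keeping the last 45 days
    DELETE FROM stats_data
     WHERE sample_ts < SYSTIMESTAMP - INTERVAL '45' DAY;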

The problem is that after some time the table is so fragmented that to read
10000 rows through 10000 index scans, each row sits in a different data
block. So basically each 10000-row scan must read 10000 table blocks (plus
the index scan, but that is little, only 60 blocks or so). The way we
relieve this performance problem is to rebuild the table and the indexes.
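
The symptom shows up as an index clustering factor close to the row count
rather than the table block count, meaning consecutive index entries point
at different table blocks. Something like this (using the made-up names
above) confirms it:

    SELECT i.index_name, i.clustering_factor, t.blocks, t.num_rows
      FROM user_indexes i
      JOIN user_tables  t ON t.table_name = i.table_name
     WHERE i.table_name = 'STATS_DATA';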

I wonder if the following would help to reduce fragmentation: in the
INSERT ... SELECT statement, add an ORDER BY timestamp clause so all rows
are inserted in order. Since the purge process is based on the timestamp,
this would remove rows sequentially (located in the same extents).
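
Roughly like this (staging_stats and stats_seq are again made-up names;
NEXTVAL is not allowed in a query that has an ORDER BY, so the ordering is
pushed into an inline view):

    INSERT INTO stats_data (stat_id, sample_ts, metric1, metric2)
    SELECT stats_seq.NEXTVAL, sample_ts, metric1, metric2
      FROM (SELECT sample_ts, metric1, metric2
              FROM staging_stats
             ORDER BY sample_ts);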

I have some other ideas, such as a SHRINK SPACE COMPACT a couple of times
a day, a full shrink of the table every 2 days, and then a rebuild of the
table once a month using ALTER TABLE MOVE.
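
That schedule would look roughly like this (shrink needs ROW MOVEMENT
enabled and an ASSM tablespace; MOVE invalidates the indexes, so they have
to be rebuilt afterwards):

    -- one-off prerequisite for shrink
    ALTER TABLE stats_data ENABLE ROW MOVEMENT;

    -- a couple of times a day: compact rows below the high water mark
    ALTER TABLE stats_data SHRINK SPACE COMPACT;

    -- every 2 days: full shrink, also lowers the high water mark
    ALTER TABLE stats_data SHRINK SPACE;

    -- monthly: full rebuild; rowids change, so rebuild the indexes
    ALTER TABLE stats_data MOVE;
    ALTER INDEX stats_data_pk REBUILD;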

I am also thinking of moving the table to manual segment space management
with old-style FREELISTS and PCTUSED.
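
For example (tablespace name, file path and values are placeholders):

    CREATE TABLESPACE stats_mssm
      DATAFILE '/u01/oradata/stats_mssm01.dbf' SIZE 4G
      EXTENT MANAGEMENT LOCAL
      SEGMENT SPACE MANAGEMENT MANUAL;

    -- FREELISTS and PCTUSED only have effect under MSSM
    ALTER TABLE stats_data MOVE TABLESPACE stats_mssm
      PCTUSED 60 STORAGE (FREELISTS 4);

One caveat: SHRINK SPACE only works on ASSM segments, so this option rules
out the shrink idea above.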


Thanks

--
LSC
